Project 4: Reinforcement Learning
TAs: Alison, Hari
Goal: Give students an understanding of how agents that learn from their actions can solve problems.
Implementation: Still many unknowns. Perhaps the inverted pendulum or another common problem that is tough to solve exactly but that RL agents do well on.
Timeline:
Out: Thursday 4/1
In: ???
CSCI 141: Artificial Intelligence
Friday, November 1, 2013
Project 3: Bayesian Spam Classification
TA: Pat
Goal: Give students an understanding of how classifiers work in general and of the probabilistic methods used to tackle problems like classifying email as spam or ham.
Implementation: Hopefully relying on last year's code to serve the data, the students would extract features from the emails, write code to train and test the model, and then evaluate the results. Perhaps start by having the students write their own simple classifier (e.g. if an email includes "viagra", label it spam, otherwise ham) so they understand the classification problem in general and see that multiple approaches exist.
Timeline:
Out: Tuesday 3/11
In: ???
Project Outline
Files they need:
- stencil code
Files on our end:
- Enron email text w/ labels
Outline:
Overview:
- explain the classification problem (using X and y)
  - X and y
  - training
  - validation
  - test (our grading)
- explain Enron
- give an overview of the 3 parts
Part 1)
Feature Extraction
- features are words (should we touch on other potential features, e.g. sender or word count?)
- given the emails, create a dictionary for easy computation of Naive Bayes (rough sketch below)
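
A minimal sketch of that dictionary, assuming the stencil hands back (text, label) pairs; the function and field names here are placeholders, not the stencil's actual interface.

from collections import defaultdict

def extract_word_counts(emails):
    """Build per-class word counts from (email_text, label) pairs.

    `emails` is assumed to be an iterable of (text, label) tuples with
    label either "spam" or "ham"; the real stencil's data format may differ.
    """
    counts = {"spam": defaultdict(int), "ham": defaultdict(int)}
    totals = {"spam": 0, "ham": 0}
    for text, label in emails:
        for word in text.lower().split():
            counts[label][word] += 1
            totals[label] += 1
    return counts, totals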
Part 2)
Classification in general
- there is no one perfect classifier
- try making your own
- one option: pick a word you expect to appear only in spam and use it as the rule (sketch below)
- report the results
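
For that one-rule baseline, a sketch along the lines of the "viagra" example from the implementation note; the keyword and the accuracy helper are illustrative, not part of the stencil.

def keyword_classifier(text, keyword="viagra"):
    """Toy one-rule classifier: spam if the keyword appears, ham otherwise."""
    return "spam" if keyword in text.lower() else "ham"

def accuracy(emails, classify):
    """Fraction of (text, label) pairs the classifier labels correctly."""
    correct = sum(1 for text, label in emails if classify(text) == label)
    return correct / len(emails)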
Part 3)
Naive Bayes
- show them the formula again on the handout
- explain why the marginal doesn't matter
- emphasize that since the scores are only proportional to the posterior (the marginal is dropped), they will not sum to 1
- (tell them to take whichever class has the larger posterior; rough sketch below) - how much do we want them to figure out on their own?
- more about how we want to make decisions, not just minimize error (ROC curves)
- report the 5 words most associated with each class - we did this in 142 but the results seemed poor. Maybe clean the data to only include words that occur at least a few times.
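
A rough sketch of the decision rule for this part, reusing the counts from Part 1: work in log space, drop the marginal, and take the class with the larger unnormalized posterior. The Laplace smoothing constant `alpha` is an assumption for illustration, not something the handout mandates.

import math

def classify_naive_bayes(text, counts, totals, priors, alpha=1.0):
    """Return the class with the larger (unnormalized) log posterior.

    The marginal P(text) is dropped, so the two scores are only
    proportional to the true posteriors and will not sum to 1.
    `alpha` is an assumed Laplace smoothing constant.
    """
    vocab = set(counts["spam"]) | set(counts["ham"])
    scores = {}
    for label in ("spam", "ham"):
        score = math.log(priors[label])
        for word in text.lower().split():
            p = (counts[label].get(word, 0) + alpha) / (totals[label] + alpha * len(vocab))
            score += math.log(p)
        scores[label] = score
    return max(scores, key=scores.get)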
Should we include this? Need to test with the Enron emails:
Part 4)
Different features
- use a bigram model
- explain what this is in NLP
- explain start/stop tokens
- otherwise Naive Bayes should work the same way (sketch of bigram extraction below)
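
If we keep this part, the feature extractor is the only piece that changes; a sketch with placeholder start/stop tokens:

def bigram_features(text, start="<s>", stop="</s>"):
    """Turn an email into bigram features, padded with start/stop tokens.

    The token strings are placeholders; with bigrams in place of single
    words, the same Naive Bayes counting and scoring applies unchanged.
    """
    words = [start] + text.lower().split() + [stop]
    return list(zip(words[:-1], words[1:]))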
Project 2: Localization
TAs: Miles, Kurt
Goal: Familiarity with probability concepts & Bayes nets, show how more information leads to less uncertainty
Implementation: The localization problem of robotics: using a map of the first floor of the CIT paired with associated LIDAR data, determine the current location.
Overview
Part 1:
Implement a naive, stateless Kalman Filter localization function. Given a map of the first floor of the CIT and a set of LIDAR data points, determine the most likely position of the robotic agent. Assume all positions are equally likely a priori. The naive Kalman Filter should NOT keep track of previous positions, taking into account solely the LIDAR data given on a single step. On one hundred randomly started trials, what is the average maximum likelihood achieved? What is the average accuracy of the stateless Kalman Filter over these one hundred trials (how often is the most likely position the true position)?
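
A minimal sketch of the single-scan scoring step, assuming the stencil exposes the candidate map positions and a sensor model likelihood(lidar, pos) ≈ P(scan | position); both names are placeholders rather than the stencil's actual interface.

def locate_stateless(positions, lidar, likelihood):
    """One-shot localization under a uniform prior over positions.

    Scores every candidate position by the likelihood of the current LIDAR
    scan alone (no history), then returns the argmax and its normalized
    probability.
    """
    scores = {pos: likelihood(lidar, pos) for pos in positions}  # uniform prior cancels
    total = sum(scores.values())
    best = max(scores, key=scores.get)
    return best, scores[best] / total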
Part 2:
Implement a stateful Kalman Filter localization function. (Ref: http://imgur.com/KrWq9NI). This function will be used by the step() function provided in the source code, which performs a 'step' of the agent in the CIT domain and provides a new set of LIDAR data. Your stateful Kalman Filter function should keep track of the LIDAR data yielded by previous calls to the step function. On ten randomly started trials, how many 'steps' does it take to reach a likelihood of 0.5 for a single position? 0.7? 0.9? Assume all positions are equally likely a priori.
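
A sketch of the recursive update for the stateful version, under the same assumed likelihood sensor model; the motion model implied by step() is left out of this sketch.

def update_belief(belief, lidar, likelihood):
    """Fold one new LIDAR scan into the running belief over positions.

    `belief` maps position -> probability carried over from previous steps
    (start it uniform for Part 2); the update multiplies in the new scan's
    likelihood and renormalizes.
    """
    new_belief = {pos: p * likelihood(lidar, pos) for pos, p in belief.items()}
    total = sum(new_belief.values())
    return {pos: p / total for pos, p in new_belief.items()}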
Part 3:
Implement a stateful Kalman Filter localization function that does not assume all positions are equally likely a priori. On ten randomly started trials, how many 'steps' does it take to reach a likelihood of 0.5 for a single position? 0.7? 0.9?
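
Part 3 only changes the starting belief; a sketch of seeding it from a non-uniform prior (the weighting scheme is purely illustrative) before reusing the same update as Part 2:

def initial_belief(positions, prior_weights=None):
    """Starting belief: normalize supplied prior weights, else fall back to uniform."""
    weights = {pos: (prior_weights.get(pos, 0.0) if prior_weights else 1.0) for pos in positions}
    total = sum(weights.values())
    return {pos: w / total for pos, w in weights.items()}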
Extra Credit:
- Multiple floors
- Particle filtering
Timeline:
Out: Tuesday 2/11
In: Thursday 2/27
Project 1: Search
TAs: Alison, Hari
Goal: Give an idea of different search algorithms and compare efficiencies
Implementation: Perhaps very similar to last year, maybe reusing the Pacman code from Berkeley
Timeline:
Out: Thursday 1/28
In: Thursday 2/6