This assignment counts for 10% of the course grade.
Assignments turned in after the deadline but before March 5 are subject to a 20% grade penalty.
In this assignment you will write perceptron classifiers (vanilla and averaged) to identify hotel reviews as either truthful or deceptive, and either positive or negative. You may use the word tokens as features, or any other features you can devise from the text. The assignment will be graded based on the performance of your classifiers, that is, how well they perform on unseen test data compared to the performance of a reference classifier.
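For illustration, here is a minimal sketch of bag-of-words feature extraction in Python; the tokenization choice (lowercasing and keeping alphabetic tokens) is just one possibility, not a requirement of the assignment:

# Minimal sketch of bag-of-words feature extraction (one possible choice,
# not a requirement): lowercase the review text and count word tokens.
import re
from collections import Counter

def extract_features(text):
    """Map a review's text to a sparse feature vector of token counts."""
    tokens = re.findall(r"[a-z']+", text.lower())
    return Counter(tokens)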
The training and development data are the same as for Coding Exercise 3, and are available as a compressed ZIP archive on Blackboard. The uncompressed archive contains the following files:
The submission script will train your model on part of the training data, and report results on the remainder of the training data (reserved as development data; see below). The grading script will train your model on all of the training data, and test the model on unseen data in a similar format. The directory structure and file names of the test data will be masked so that they do not reveal the labels of the individual test files.
The perceptron algorithms appear in Hal Daumé III, A Course in Machine Learning (v. 0.99 draft), Chapter 4: The Perceptron.
You will write two programs in Python 3 (Python 2 has been deprecated): perceplearn.py will learn perceptron models (vanilla and averaged) from the training data, and percepclassify.py will use the models to classify new data. You are encouraged to reuse your own code from Coding Exercise 3 for reading the data and writing the output, so that you can concentrate on implementing the classification algorithm.
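For reference, the update rules from the CIML chapter can be sketched as follows for a single binary decision (for example, truthful vs. deceptive), assuming features are dicts of counts and labels are +1/-1; this is a sketch of the learning step only, not a complete perceplearn.py:

# Sketch of vanilla and averaged perceptron training for one binary decision.
# examples: list of (features, y) pairs, where features is a dict of counts
# and y is +1 or -1; the averaging trick follows CIML Chapter 4.
# (CIML also permutes the examples each epoch; that step is omitted here.)
from collections import defaultdict

def train_perceptron(examples, epochs=20):
    w, b = defaultdict(float), 0.0      # current weights and bias
    u, beta = defaultdict(float), 0.0   # running sums used for averaging
    c = 1                               # example counter
    for _ in range(epochs):
        for features, y in examples:
            activation = b + sum(w[f] * v for f, v in features.items())
            if y * activation <= 0:     # mistake: update both sets of weights
                for f, v in features.items():
                    w[f] += y * v
                    u[f] += y * c * v
                b += y
                beta += y * c
            c += 1
    # Averaged parameters: w_avg = w - u / c, b_avg = b - beta / c
    w_avg = {f: w[f] - u[f] / c for f in w}
    return dict(w), b, w_avg, b - beta / c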
The learning program will be invoked in the following way:
> python perceplearn.py /path/to/input
The argument is the directory of the training data; the program will learn perceptron models, and write the model parameters to two files: vanillamodel.txt for the vanilla perceptron, and averagedmodel.txt for the averaged perceptron.
The format of the model files is up to you, but they must contain sufficient information for percepclassify.py to successfully label new data.
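For example (this layout is only a suggestion, not part of the assignment), the parameters for both binary decisions could be stored together as JSON, which percepclassify.py can reload directly:

# One possible (hypothetical) model file layout: a single JSON object holding
# the weights and bias for each of the two binary decisions.
import json

def save_model(path, td_weights, td_bias, pn_weights, pn_bias):
    model = {
        "truthful_vs_deceptive": {"weights": td_weights, "bias": td_bias},
        "positive_vs_negative":  {"weights": pn_weights, "bias": pn_bias},
    }
    with open(path, "w", encoding="utf-8") as f:
        json.dump(model, f, indent=2)

def load_model(path):
    with open(path, encoding="utf-8") as f:
        return json.load(f)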
The classification program will be invoked in the following way:
> python percepclassify.py /path/to/model /path/to/input
The first argument is the path to the model file (vanillamodel.txt or averagedmodel.txt), and the second argument is the path to the directory of the test data; the program will read the parameters of a perceptron model from the model file, classify each entry in the test data, and write the results to a text file called percepoutput.txt in the following format:
label_a label_b path1
label_a label_b path2
⋮
In the above format, label_a is either “truthful” or “deceptive”, label_b is either “positive” or “negative”, and pathn is the path of the text file being classified.
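As a sketch of the classification and output step (the convention that a positive score maps to “truthful” or “positive” is an assumption of this example, and the feature dictionaries follow the earlier sketches):

# Sketch of scoring a document and writing percepoutput.txt in the required
# format; the score-to-label convention here is an assumed choice.
def score(weights, bias, features):
    return bias + sum(weights.get(f, 0.0) * v for f, v in features.items())

def write_output(results, out_path="percepoutput.txt"):
    # results: iterable of (label_a, label_b, path) triples
    with open(out_path, "w", encoding="utf-8") as f:
        for label_a, label_b, path in results:
            f.write(f"{label_a} {label_b} {path}\n")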
Note that in the training data, it is trivial to infer the labels from the directory names in the path. However, directory names in the development and test data on Vocareum will be masked, so the labels cannot be inferred this way.
All submissions will be completed through Vocareum; please consult the instructions for how to use Vocareum.
Multiple submissions are allowed; only the final submission will be
graded. Each time you submit, a submission script is invoked. The
submission script uses a specific portion of the training data
as development data; it trains your model on the remaining training
data, runs your classifier on the development data, and reports the results.
Do not include the data in your submission: the submission script
reads the data from a central directory, not from your
personal directory.
You should upload only your program files to Vocareum, that is, percepclassify.py and perceplearn.py (plus any required auxiliary files, such as code shared between the programs or a word list that you wrote yourself).
You are encouraged to submit early and often in order to iron out any problems, especially issues with the format of the final output.
The performance of your classifier will be measured automatically; failure to format your output correctly may result in very low scores, which will not be changed.
For full credit, make sure to submit your assignment well before the deadline. The time of submission recorded by the system is the time used for determining late penalties. If your submission is received late, whatever the reason (including equipment failure and network latencies or outages), it will incur a late penalty.
If you have any issues with Vocareum (logging in, submission, code not executing properly, etc.), please make a post on Piazza so the instructional team can look into the issue.
After the due date, we will train your model on the full training data (including development data), run your classifier on unseen test data twice (once with the vanilla model, and once with the averaged model), and compute the F1 score of your output compared to a reference annotation for each of the four classes (truthful, deceptive, positive, and negative). Your grade will be based on the performance of your classifier. We will calculate the mean of the four F1 scores for each model and scale it to the performance of a perceptron classifier developed by the instructional staff (so if that classifier has F1=0.8, then a score of 0.8 will receive full credit, and a score of 0.72 will receive 90% credit; your vanilla perceptron will be compared to a reference vanilla perceptron, and your averaged perceptron will be compared to a reference averaged perceptron). The overall grade will be the mean of the grades for the vanilla and averaged perceptrons.
Note that the measure for grading is the macro-average over classes; macro- and micro-averaging are explained in Manning, Raghavan, and Schütze, Introduction to Information Retrieval, Chapter 13: Text classification and Naive Bayes. For more information on F1, see Manning, Raghavan, and Schütze, Introduction to Information Retrieval, Chapter 8: Evaluation in information retrieval.
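As an illustration of the grading measure (not the grading script itself), per-class F1 and its macro-average can be computed as follows, assuming gold and predicted are parallel lists of labels with one entry per document for each of the two decisions:

# Sketch of macro-averaged F1 over the four classes used for grading.
def f1_for_class(gold, predicted, cls):
    tp = sum(1 for g, p in zip(gold, predicted) if p == cls and g == cls)
    fp = sum(1 for g, p in zip(gold, predicted) if p == cls and g != cls)
    fn = sum(1 for g, p in zip(gold, predicted) if p != cls and g == cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

def macro_f1(gold, predicted):
    classes = ("truthful", "deceptive", "positive", "negative")
    return sum(f1_for_class(gold, predicted, c) for c in classes) / len(classes)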
The submission script will run perceplearn.py on a directory containing only folds 2, 3, and 4, and it will run percepclassify.py on a directory with a modified version of fold 1, where directory and file names are masked. While developing on your own you may use different splits of the data (but to get the same results as the submission script, you'll need to use the same split). The grading script will use all 4 folds for training, and unseen data for testing.
perceplearn.py on the training data, running on a MacBook Pro from 2016.