I am trying to run a WordCount job in Hadoop, but I always get a ClassNotFoundException. I am posting the classes that I wrote and the command I use to run the job.

    package org.gamma;

    import java.util.*;
    import org.apache.hadoop.mapreduce.lib.input.TextInputFormat;
    import org.apache.hadoop.mapreduce.lib.output.TextOutputFormat;
    import org.apache.hadoop.mapreduce.Job;
    import org.apache.hadoop.conf.Configuration;
    import org.apache.hadoop.fs.Path;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.io.IntWritable;

    public class WordCount {

        public static int main(String[] args) throws Exception {

            // if we got the wrong number of args, then exit
            if (args.length != 4 || !args[0].equals("-r")) {
                System.out.println("usage: WordCount -r <numReducers> <inputPath> <outputPath>");
                return -1;
            }

            // Get the default configuration object
            Configuration conf = new Configuration();

            // now create the MapReduce job
            Job job = new Job(conf);
            job.setJobName("WordCount");

            // we'll output text/int pairs (since we have words as keys and counts as values)
            job.setMapOutputKeyClass(Text.class);
            job.setMapOutputValueClass(IntWritable.class);

            // again we'll output text/int pairs (since we have words as keys and counts as values)
            job.setOutputKeyClass(Text.class);
            job.setOutputValueClass(IntWritable.class);

            // tell Hadoop the mapper and the reducer to use
            job.setMapperClass(WordCountMapper.class);
            job.setCombinerClass(WordCountReducer.class);
            job.setReducerClass(WordCountReducer.class);

            // we'll be reading in a text file, so we can use Hadoop's built-in TextInputFormat
            job.setInputFormatClass(TextInputFormat.class);

            // we can use Hadoop's built-in TextOutputFormat for writing out the output text file
            job.setOutputFormatClass(TextOutputFormat.class);

            // set the input and output paths
            TextInputFormat.setInputPaths(job, args[2]);
            TextOutputFormat.setOutputPath(job, new Path(args[3]));

            // set the number of reduce tasks
            try {
                job.setNumReduceTasks(Integer.parseInt(args[1]));
            } catch (Exception e) {
                System.out.println("usage: WordCount -r <numReducers> <inputPath> <outputPath>");
                return -1;
            }

            // force the mappers to handle one megabyte of input data each
            TextInputFormat.setMinInputSplitSize(job, 1024 * 1024);
            TextInputFormat.setMaxInputSplitSize(job, 1024 * 1024);

            // this tells Hadoop to ship around the jar file containing "WordCount.class" to all of
            // the different nodes so that they can run the job
            job.setJarByClass(WordCount.class);

            // submit the job and wait for it to complete!
            int exitCode = job.waitForCompletion(true) ? 0 : 1;
            return exitCode;
        }
    }

    package org.gamma;

    import java.io.IOException;
    import java.util.regex.Pattern;
    import java.util.regex.Matcher;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.LongWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Mapper;

    public class WordCountMapper extends Mapper<LongWritable, Text, Text, IntWritable> {

        // create these guys up here for speed
        private final static IntWritable one = new IntWritable(1);
        private Text word = new Text();

        // create a Pattern object to parse each line
        private final Pattern wordPattern = Pattern.compile("[a-zA-Z][a-zA-Z0-9]+");

        public void map(LongWritable key, Text value, Context context)
                throws IOException, InterruptedException {
            // put your code here!
        }
    }

    package org.gamma;

    import java.io.IOException;
    import org.apache.hadoop.io.IntWritable;
    import org.apache.hadoop.io.Text;
    import org.apache.hadoop.mapreduce.Reducer;

    public class WordCountReducer extends Reducer<Text, IntWritable, Text, IntWritable> {

        public void reduce(Text key, Iterable<IntWritable> values, Context context)
                throws IOException, InterruptedException {
            // put your code here!!
        }
    }
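(The two "put your code here" stubs are assignment placeholders and are not what triggers the exception. For completeness, here is a minimal sketch of what they might contain, assuming the standard word-count logic and reusing the one, word, and wordPattern fields declared in the mapper above:)

    // minimal sketch of the map body, assuming standard word-count logic:
    // match each word in the line and emit (word, 1)
    public void map(LongWritable key, Text value, Context context)
            throws IOException, InterruptedException {
        Matcher matcher = wordPattern.matcher(value.toString());
        while (matcher.find()) {
            word.set(matcher.group());
            context.write(word, one);
        }
    }

    // minimal sketch of the reduce body: sum the counts seen for each word
    public void reduce(Text key, Iterable<IntWritable> values, Context context)
            throws IOException, InterruptedException {
        int sum = 0;
        for (IntWritable val : values) {
            sum += val.get();
        }
        context.write(key, new IntWritable(sum));
    }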

The wordcount.jar is exported to my Downloads folder, and this is the command I use to run the job:

    jeet@jeet-Vostro-2520:~/Downloads$ hadoop jar wordcount.jar org.gamma.Wordcount /user/jeet/getty/gettysburg.txt /user/jeet/getty/out

And this is the exception I am getting every time:

    Warning: $HADOOP_HOME is deprecated.
    Exception in thread "main" java.lang.ClassNotFoundException: org.gamma.Wordcount
            at java.net.URLClassLoader$1.run(URLClassLoader.java:366)
            at java.net.URLClassLoader$1.run(URLClassLoader.java:355)
            at java.security.AccessController.doPrivileged(Native Method)
            at java.net.URLClassLoader.findClass(URLClassLoader.java:354)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:425)
            at java.lang.ClassLoader.loadClass(ClassLoader.java:358)
            at java.lang.Class.forName0(Native Method)
            at java.lang.Class.forName(Class.java:270)
            at org.apache.hadoop.util.RunJar.main(RunJar.java:149)

Somebody please help; I think I am very close to it.
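(One thing worth noting when comparing the command with the code: Java class names are case-sensitive, and the source above declares org.gamma.WordCount with a capital C, while the command asks for org.gamma.Wordcount, which is exactly the name in the ClassNotFoundException. The driver also expects four arguments starting with -r. Assuming the jar really contains the classes shown above, a corrected invocation would presumably look like the following, with 1 as an example reducer count:)

    jeet@jeet-Vostro-2520:~/Downloads$ jar tf wordcount.jar   # confirm the exact class names packaged in the jar
    jeet@jeet-Vostro-2520:~/Downloads$ hadoop jar wordcount.jar org.gamma.WordCount -r 1 /user/jeet/getty/gettysburg.txt /user/jeet/getty/out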
