Linux was my stepping stone into the world of big data. I still remember my very first class, in which the instructor spent about three hours just helping us install Hadoop. The program itself was beastly difficult, but the result was sweet. It took me weeks to figure out how to install all the applications that accompany Hadoop in performing its magic. I later switched to Spark for various reasons, but I still visit the Ubuntu environment from time to time to enjoy its sleek interface and lightning-fast performance. Here are some basic commands that new users will find handy to get started:
Some fun commands I like to run before getting to the main course:
Basics
$ whoami
and
$ finger
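As a quick sanity check of who you are logged in as, `whoami` prints your effective username, and `finger` shows login details for users on the system. Note that on many minimal Ubuntu installs `finger` is not present by default, so you may need to install it first (the package name below is the standard Ubuntu one):

```shell
# Print the effective username of the current user
whoami

# finger is often not installed by default on Ubuntu;
# install it with: sudo apt install finger
# Then show login name, terminal, and idle time for logged-in users:
finger 2>/dev/null || echo "finger is not installed"
```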
Running a command with superuser privileges (note that `sudo` takes the command directly; the quoted `-c "command"` form belongs to `su`):
$ sudo mkdir /etc/test
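One common gotcha worth knowing early: redirections are performed by your own shell before `sudo` runs, so they do not get root privileges. A sketch of the failing pattern and the usual workaround, using `/etc/hosts` purely as an illustrative target:

```shell
# This FAILS with "Permission denied": the >> redirection is done by
# your (non-root) shell, not by sudo:
#   sudo echo "127.0.0.1 myhost" >> /etc/hosts

# Workaround: run the whole command line inside a root shell,
# so the redirection happens with root privileges:
sudo sh -c 'echo "127.0.0.1 myhost" >> /etc/hosts'
```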
Making a text file using the cat command:
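A minimal sketch of the technique: redirect `cat`'s output into a file, type your text, then press Ctrl-D on an empty line to finish. The filename `notes.txt` is just an example:

```shell
# Create a new text file; type your lines, then press Ctrl-D to save
cat > notes.txt

# Display the file's contents back
cat notes.txt
```

You can also pre-fill the file non-interactively with a here-document, e.g. `cat > notes.txt <<'EOF' ... EOF`, which is handy in scripts.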