Experiment
Twitter processes about half a billion tweets per day, each at most 140 characters. For experimentation purposes, Twitter makes these tweets available to the public: access was initially through the REST API, and authentication later moved to OAuth. The Twitter Streaming API provides a real-time sample amounting to roughly 1% of Twitter's load. On average, Twitter receives about 5,787 tweets per second (500,000,000 tweets / (24 hours * 60 mins * 60 secs)), so the Streaming API sends us about 58 tweets per second. This data arrives constantly, so we need a mechanism to store 58 tweets every second and run real-time analytics on them. Even though each tweet is at most 140 characters, or 280 bytes (a char is 2 bytes), the Streaming API sends a lot of additional information for each tweet (1 tweet ≈ 4 KB). This information is sent in JSON format.
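The per-second figures follow directly from the 500-million-a-day number quoted above; a quick sanity check:

```python
# Sanity-check the tweet-rate arithmetic quoted above.
TWEETS_PER_DAY = 500_000_000      # figure quoted in the text
SECONDS_PER_DAY = 24 * 60 * 60    # 86,400 seconds in a day

tweets_per_second = round(TWEETS_PER_DAY / SECONDS_PER_DAY)         # full firehose rate
sample_per_second = round(TWEETS_PER_DAY / SECONDS_PER_DAY * 0.01)  # 1% streaming sample

print(tweets_per_second)  # 5787
print(sample_per_second)  # 58
```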
Twitter data is a very valuable tool in the field of marketing. Vendors can do sentiment analysis on tweets for a specific hashtag. So if Samsung wants to know how happy people are with its products, it can find out from the tweet data. As a result, a lot of research in NLP (Natural Language Processing) has started around this. Apart from that, we can run many machine learning tasks on these tweets.
As part of this experiment I implemented a consumer to read the stream of JSON tweets and persist them in MongoDB. Since it is a write-heavy application, I sharded (load-balanced) my MongoDB. This application runs forever, and the data keeps filling up my MongoDB cluster. To keep storage to a minimum, I extracted and stored only the tweet ID, the tweeter's name, the actual tweet text, and the source of the tweet (e.g. twitter.com, facebook.com, BlackBerry, etc.). Then I set up a MongoDB incremental map-reduce job to run every 10 minutes. This job computes the count of tweets per unique source; from that I generated the top-10 statistics and created a chart using JFreeChart.
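The aggregation step boils down to counting tweets per source and keeping the top 10. A minimal sketch of that logic in Python, assuming documents shaped like the fields stored above (the sample records and the `top_sources` helper are illustrative, not from the original project):

```python
from collections import Counter

def top_sources(tweets, n=10):
    """Count tweets per source and return the n most common (source, count) pairs."""
    counts = Counter(t["source"] for t in tweets)
    return counts.most_common(n)

# Hypothetical sample documents shaped like the stored tweets.
tweets = [
    {"id_str": "1", "name": "alice", "text": "hello", "source": "web"},
    {"id_str": "2", "name": "bob",   "text": "hi",    "source": "web"},
    {"id_str": "3", "name": "carol", "text": "hey",   "source": "blackberry"},
]
print(top_sources(tweets))  # [('web', 2), ('blackberry', 1)]
```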
Architectural Overview
Execution
Setup Twitter Account
Go to https://dev.twitter.com/apps and click "Create a new application".
Fill in all mandatory information and submit the application.
Go to the "OAuth Tool" tab and note down the Consumer Key and Consumer Secret.
Run the following program after changing the consumer key and consumer secret.
Follow the program's instructions to generate the Access Token and Access Token Secret.
(These can also be obtained later from the "OAuth Tool" tab.)
Setup MongoDB
MongoDB provides sharding at the database/collection level, so I set up a simple MongoDB sharded cluster with two laptops, each a Toshiba running Windows 7 with a 1 TB disk and 4 GB RAM.
System-1 : 192.168.1.100
System-2 : 192.168.1.101
MongoDB 2.2.3 is installed on both laptops at c:\apps\mongodb.
Create these directories on System-1:
c:\apps\mongodb\data1
c:\apps\mongodb\data2
c:\apps\mongodb\data3
c:\apps\mongodb\conf
Also create c:\apps\mongodb\data1 on System-2, since it hosts a shard as well.
On System1, start the first shard server:
c:\apps\mongodb\bin> mongod --shardsvr --dbpath c:\apps\mongodb\data1 --port 27020
On System2, start the second shard server:
c:\apps\mongodb\bin> mongod --shardsvr --dbpath c:\apps\mongodb\data1 --port 27020
Back on System1, start the third shard server and the config server:
c:\apps\mongodb\bin> mongod --shardsvr --dbpath c:\apps\mongodb\data2 --port 27021
c:\apps\mongodb\bin> mongod --configsvr --dbpath c:\apps\mongodb\conf --port 27022
Then start the mongos query router, pointing --configdb at the config server started above (mongos itself takes no --configsvr flag):
c:\apps\mongodb\bin> mongos --configdb 192.168.1.100:27022 --port 27017
c:\apps\mongodb\bin> mongo 192.168.1.100:27017
mongos> use admin
switched to db admin
mongos> db.runCommand({addShard: "192.168.1.100:27020"});
{ "shardAdded" : "shard0000", "ok" : 1 }
mongos> db.runCommand({addShard: "192.168.1.100:27021"});
{ "shardAdded" : "shard0001", "ok" : 1 }
mongos> db.runCommand({addShard: "192.168.1.101:27020"});
{ "shardAdded" : "shard0002", "ok" : 1 }
mongos> db.runCommand({listShards: 1})
{
  "shards" : [
    { "_id" : "shard0000", "host" : "192.168.1.100:27020" },
    { "_id" : "shard0001", "host" : "192.168.1.100:27021" },
    { "_id" : "shard0002", "host" : "192.168.1.101:27020" }
  ],
  "ok" : 1
}
mongos> use twitterdb
switched to db twitterdb
mongos> db.createCollection("tweets")
{ "ok" : 1 }
mongos> use admin
switched to db admin
mongos> db.runCommand({enableSharding: "twitterdb"})
{ "ok" : 1 }
mongos> db.runCommand({shardCollection: "twitterdb.tweets", key: {id_str: 1}})
{ "collectionSharded" : "twitterdb.tweets", "ok" : 1 }
mongos> use twitterdb
switched to db twitterdb
mongos> db.tweets.find().count()
0
Running the Application
So we have just finished setting up the shards and the database.
RUN the Twitter stream application below (please change the appropriate values as per your settings). Data starts pumping into MongoDB. Don't forget to stop the application when you are done; otherwise the Twitter stream keeps consuming network bandwidth and the MongoDB storage keeps growing.
System:1
c:\apps\mongodb\data1
c:\apps\mongodb\data2
System:2
c:\apps\mongodb\data1
The data files grow continuously. Verify the count of tweets in the database:
mongos> db.tweets.find().count()
25043
So roughly 25,000 tweets accumulated in 10 minutes, or about 42 tweets per second (25,043 / 600 s). Find out how many people tweeted via the web in these 10 minutes:
mongos> db.tweets.find({"source" : "web"}).count()
4365
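The observed throughput and the share of web tweets can be checked with the same back-of-the-envelope arithmetic, using the two counts above:

```python
# Back-of-the-envelope check of the observed numbers from the database counts.
total_tweets = 25043        # db.tweets.find().count()
web_tweets = 4365           # tweets whose source is "web"
window_seconds = 10 * 60    # the 10-minute collection window

print(round(total_tweets / window_seconds))    # 42  (tweets per second observed)
print(round(100 * web_tweets / total_tweets))  # 17  (percent tweeted via the web)
```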
Now RUN the map-reduce job below every 10 minutes to aggregate the results and generate the reports as pie charts. These charts are stored in your local file system.
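The job relies on MongoDB's incremental map-reduce: each 10-minute run reduces only the new batch of tweets and merges the partial counts into the running totals. As a language-neutral sketch of that merge step (the function, variable names, and sample numbers here are illustrative, not from the original project):

```python
from collections import Counter

def merge_batch(running_totals, new_batch):
    """Fold a new batch of tweets into running per-source counts, the way
    incremental map-reduce merges fresh results into its output collection."""
    batch_counts = Counter(t["source"] for t in new_batch)  # map + reduce over the batch
    running_totals.update(batch_counts)                     # re-reduce against existing totals
    return running_totals

totals = Counter({"web": 4365, "twitter.com": 1200})  # totals from earlier runs (illustrative)
batch = [{"source": "web"}, {"source": "blackberry"}, {"source": "web"}]
totals = merge_batch(totals, batch)
print(totals["web"])         # 4367
print(totals["blackberry"])  # 1
```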
Download
The entire project can be downloaded here.