{"id":1979,"date":"2014-02-20T03:23:19","date_gmt":"2014-02-20T03:23:19","guid":{"rendered":"https:\/\/notebooks.dataone.org\/?p=1979"},"modified":"2014-02-20T03:23:19","modified_gmt":"2014-02-20T03:23:19","slug":"consolidating-year-1-year-4-dataoneorg-tweets","status":"publish","type":"post","link":"https:\/\/notebooks.dataone.org\/data-science\/consolidating-year-1-year-4-dataoneorg-tweets\/","title":{"rendered":"Consolidating Year 1 – Year 4 @DataONEorg Tweets"},"content":{"rendered":"

I am continuing quality control efforts today.

From looking at checksums for the files, some of the 147 appear to be the same. This concerns me due to the possibility of human error (my error) in creating the files, since I scraped tweets manually with a browser extension, rather than via a computer programming language like Python.

Even though the files should all be about the same size, since there are 10 tweets per page and 140 characters max per tweet, the presence of a few identical file sizes concerns me. Still, it's worth spot checking some, such as "Y4060" and "Y3060", whose names differ only slightly.

Opening up the folder where I put them all, I'm going to spot check the few I identified yesterday as having identical file sizes, starting with Y4060 and Y3060.

Y4060 starts with "DataFOUR" while Y3060 starts with "Mark Gollan." I'm relieved that these two, which seem the likeliest to have human error, are not the same files.

I think the easiest thing to do is to collate all of the spreadsheets into one file, then see if there are any duplicates.
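Before collating everything, a quicker cross-check would be to group the 147 exported files by checksum rather than eyeballing file sizes. A minimal Python sketch, assuming the exports all sit in one local folder (the folder name here is just a placeholder):

```python
import hashlib
from collections import defaultdict
from pathlib import Path

# Group files by MD5 checksum; any group with more than one file is a
# candidate duplicate worth spot checking by hand.
groups = defaultdict(list)
for path in sorted(Path("DataONE-Topsy-exports").glob("Y*")):
    digest = hashlib.md5(path.read_bytes()).hexdigest()
    groups[digest].append(path.name)

for digest, names in groups.items():
    if len(names) > 1:
        print(digest, names)
```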

Possible options I have in mind:

a) Combine in Google Drive

b) Combine in desktop spreadsheet software (Excel, OpenOffice)

c) Combine CSV files.

I did a Google search for "combine spreadsheets in google drive" without quotes. Two results came up.

I found a pretty nice explanation of a "range function" here:

http://www.jellybend.com/2013/01/10/merge-multiple-google-documents/

"The ID of the document is the string between '../d/' and '/edit', in this example the ID is: '1uWocGqA7Bifl61Vu8-TOWpALDHic8gbc5oCuZivZ1Dg'. All you have to do is to put this ID from cell A5 of the Spreadsheet document…"

Looks great, but obtaining the unique ID from all 147 documents is still kind of a pain.

I made a copy of the spreadsheet to possibly use later: https://docs.google.com/spreadsheet/ccc?key=0Av9TV1q9zxYudExVeV90bWNPcGxLZE9kTmctX3BNR1E#gid=0
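If I do end up gathering the 147 sharing URLs, pulling the document IDs out of them would not have to be manual. A small sketch, assuming the URLs have been pasted one per line into a text file (the file name is made up):

```python
import re

# A Google Drive document ID is the long token after "key=" (old-style URLs)
# or between "/d/" and "/edit" (new-style URLs).
ID_PATTERN = re.compile(r"(?:key=|/d/)([A-Za-z0-9_-]+)")

with open("sheet_urls.txt") as handle:
    for line in handle:
        match = ID_PATTERN.search(line)
        if match:
            print(match.group(1))
```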

For the third option concerning merging CSV files, I did a Google search for "merge multiple CSV files" without quotes.

The first result was interesting, "Merge CSV Files Into One Large CSV File In Windows 7 – Solve."

I also saw a result at position 6 for "How to merge multiple CSV-files into one with Mac OS X terminal."

I am a bit annoyed that Google Drive won't let me download my spreadsheets as raw CSV – is it possible there is a function for that? Worth looking… an article mentions "Using the google drive API to download a spreadsheet in CSV format."

Again, I am still going to need the document IDs, which is annoying, time consuming, and possibly error prone.

docs.google.com/feeds/download/spreadsheets/Export?key=<FILE_ID>&exportFormat=csv&gid=0
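Once the IDs are in hand, that export URL could be fetched per document rather than by hand. A rough sketch, assuming the spreadsheets are shared so the link works without an interactive login (the example ID is a placeholder):

```python
import urllib.request

EXPORT_URL = ("https://docs.google.com/feeds/download/spreadsheets/Export"
              "?key={file_id}&exportFormat=csv&gid=0")

def download_sheet_as_csv(file_id, out_path):
    """Fetch one spreadsheet as CSV and save it locally."""
    with urllib.request.urlopen(EXPORT_URL.format(file_id=file_id)) as response:
        data = response.read()
    with open(out_path, "wb") as handle:
        handle.write(data)

# Example with a made-up ID:
# download_sheet_as_csv("0Av9TV1q9zxYu-example-id", "Y4060.csv")
```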

Since I'm on a Mac at the moment, I think the fastest way is going to be "Merge multiple .csv files."

I'd like to open up all 147 spreadsheets at the same time, then systematically save them as .csv.

I created a new folder, "DataONE", inside "Documents" on my local machine.

I downloaded all 147 files as spreadsheets for Microsoft Excel. I'm optimistic my computer has enough memory to open up all 147 files at once, then systematically save them as .csv files. Another possibility: could I perhaps drag all 147 .xls files into TextWrangler? I'll try that. Definitely not. Had to force quit.

At the risk of freezing up my computer, let me try and open all 147 files in Excel. Start 1:09 pm.

Ended at about 1:14 – took a while to load all those files. On the Mac I can just do Command + Shift + S to "save as" but then I have to select the filetype as .csv from a drop-down. Somewhat annoying but it's only 147 so it's not implausible to do by hand.

Honestly, though, I just realized a problem with this approach – if I save as .csv, any tweet that contains a "," character will get split across columns.

Is it possible to save as a tab-delimited file?

I may have to end up collecting all of the 147 URLs anyway, to use the range function.

Under "specialty formats" there is "tab delimited text file" – so that would work in terms of preserving comma content in the original tweets. However, I still have the problem of merging all the files into one.

For this reason, begrudgingly, I think I will collect the 147 document IDs to use the import range function.

I do have one question though. Can I change the tab-delimited text file to a .csv, to use the merge function in Terminal? Let me generate two tab-delimited text files to see what happens.
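For what it's worth, a tab-delimited file can be rewritten as a well-formed .csv without losing commas inside tweets, because a CSV writer quotes any field that contains a comma. A minimal sketch, assuming one of the exported tab-delimited files as input (file names are placeholders):

```python
import csv

# Read a tab-delimited export and rewrite it as CSV; fields containing
# commas are automatically wrapped in quotes, so tweet text survives intact.
with open("Y4340.txt", newline="") as src, \
        open("Y4340.csv", "w", newline="") as dst:
    reader = csv.reader(src, delimiter="\t")
    writer = csv.writer(dst)
    for row in reader:
        writer.writerow(row)
```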

I don't think they are useful due to the comma content problem, so I am going to delete these .csv files that I just created:

Y438; Y4370

Now, with the three files Y4340, Y4350, Y4360 open in TextWrangler, I will see if I can systematically save them as .csv files. I'm just replacing .txt with .csv when I save the document.

Key point: make sure there are actually commas in one of the documents to see how they behave:

There are no commas in Y4350.

There are commas in Y4340.

There are commas in Y4360.

So now I have my three files as .csv files. I'm moving them to a new folder to try and combine them all with the Mac OS X terminal method.

I created a new folder within Documents/DataONE/CSV-files.

I moved the three files into this new folder.

I opened up Terminal.

I executed the command:

cat *.csv > merged.csv

Took a screen capture: Merged-csv-files

What's the result? I have a file called "merged.csv" with 31 rows of content.

That makes sense since there is a title row, with 10 rows of tweets per file. If I follow through for all 147 files, I should have approximately 1470 rows of tweets. I can then look and see how these are arranged.
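If the repeated title rows become a nuisance at 147 files, the merge could also be scripted so that only the first file's header is kept. A sketch under that assumption (the output file name is made up):

```python
import glob

# Concatenate every per-page CSV, keeping the header from the first file
# only; skip any previously merged output so it is not re-read on a rerun.
paths = sorted(p for p in glob.glob("*.csv") if not p.startswith("merged"))

with open("merged_all.csv", "w") as out:
    for i, path in enumerate(paths):
        with open(path) as src:
            lines = src.readlines()
        out.writelines(lines if i == 0 else lines[1:])
```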

One problem is I am not sure how these will be ordered. Let's look at the three files to see how they came out.

They are sequentially ordered, so I'm curious how the cat *.csv command handled them – presumably the shell expands *.csv alphabetically, which would put Y4340 first.

Row 2 in the new merged.csv spreadsheet contains:

Matt Jones @metamattj Ò@djhocking Schildhauer: morpho software and knb (component of @DataONEorg) workflow to share and find data based on" EML metadata. #esa2013 6 months agoReplyRetweetFavorite

Row 11 contains:

kristina simonaityt_ @kristinasimona RT @JacquelynGill: Follow these tweeps in the #ESA2013 session on sharing in science: @ethanwhite @recology_ @metamattj @cjlortie @sandramcÉ 6 months agoReplyRetweetFavoriteText

Row 12 contains:

Jacquelyn Gill @jacquelyngill Follow these tweeps in the #ESA2013 session on sharing in science: @ethanwhite @recology_ @metamattj @cjlortie @sandramchung @dataoneorg 6 months agoReplyRetweetFavorite1 more

Row 21 contains:

Sandra M. Chung @sandramchung RT @DataONEorg: Looking fwd to ignite tomorrow. 8am a bit early for rapid-fire presentation but a great line-up @NEON @NCEAS @cjlortie @reÉ 6 months agoReplyRetweetFavoriteText

Row 22 contains:

Scott Chamberlain @recology_ RT @DataONEorg: Looking fwd to ignite tomorrow. 8am a bit early for rapid-fire presentation but a great line-up @NEON @NCEAS @cjlortie @reÉ 6 months agoReplyRetweetFavorite

Row 31 contains:

Leah A. Wasser @leahawasser Got a data mgmt plan? Great session now by @DataONEorg and @nceas #esa2013 6 months agoReplyRetweetFavorite1 more

I am a bit concerned about what these special characters are – for example "@reÉ". If characters are being altered or corrupted, that could impact sentiment analysis.

A spot of good news is that commas will be preserved – notice "data, metadata, and download." are preserved in one row:

Daniel Hocking @djhocking Budden: can use nemercury @DataONEorg search to find data, metadata, and download. Nice map search too. #ESA2013 #ignite #openscience 6 months agoReplyRetweetFavorite

So let's look at the three .csv files and see what they start and end with to see what order they were processed in, and identify the problem with the special characters (verify there is a problem). I'll open them in TextWrangler.

Y4340 starts with Matt and ends with kristina (rows 2 – 11).

Y4350 starts with Jacquelyn and ends with Sandra (rows 12 – 21).

The É special character appears to be an ellipsis. Unfortunately, URLs were not preserved with the scraping method (Twitter shortens URLs). For example, a short URL shared on Twitter normally would be "t.co" but even this short URL might be too long.

Essentially this body of work provides access to text and sentiments, but will likely leave out URLs. Another method will need to be devised to extract URLs.

Another change I notice is "kristina simonaityt_" from kristina simonaitytė. I'm not sure if there is a workaround for preserving that, or if Google Drive preserved the special character in the first place.

I'm also going back to sites.google.com to find the URL with the original tweet containing the "@Re…" – that will be Y4350. That corresponds to URL 143 at <https://sites.google.com/site/mountainsol/>

"SHOULD" correspond. It does not. I'm a bit confused. Row 22 contains the special character. Row 22 will be in – ah. Year 4, offset 360. So, that should be URL 144 from <https://sites.google.com/site/mountainsol/>.

Ok, I changed the offset key to 360:

http://topsy.com/s?q=%40DataONEorg&type=tweet&sort=date&offset=360&mintime=1375358424&maxtime=1391515257

Ah, still not there – very confused. But using search I located the tweet from row 21 in question:

RT @DataONEorg: Looking fwd to ignite tomorrow. 8am a bit early for rapid-fire presentation but a great line-up @NEON @NCEAS @cjlortie @re…

It looks like "Re…" was something else in the original tweet (@recology_) from @DataONEorg, and either Topsy or Twitter truncated it once the retweet exceeded the 140 characters allowed.

Here's the original tweet:

"Looking fwd to ignite tomorrow. 8am a bit early for rapid-fire presentation but a great line-up @NEON @NCEAS @cjlortie @recology_ #ESA2013"

— DataONE (@DataONEorg) August 5, 2013

https://twitter.com/DataONEorg/status/364497908143755264

I am concerned that I can't map the tweet back to the URL, but I don't see that it's a problem at the moment for what I'm doing, as long as the tweets are in order.

If processed as I expect, they will be in reverse chronological order. Y4 offset 350 are the newest tweets of year 4; Y4 offset 010 are the oldest tweets of year 4.

At the moment, I have established that special characters are not handled well by the process I have outlined. Let me see how the ellipses are stored in the Excel files I obtained from Google Docs, starting with Y4. In the text / csv file, ellipses are represented as the character "É". I'm looking at rows 3 and 4 in Y4340 right now. Row 3 has "Ê" for some reason. Let's look at the .xls file, rows 3 and 4:
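One plausible explanation, which I have not verified: Excel for Mac writes plain-text exports in the Mac Roman encoding, where the ellipsis is byte 0xC9 and the non-breaking space is 0xCA; read back as Latin-1 or Unicode, those bytes display as "É" and "Ê". A quick Python check, with the file name standing in for any of the exported text files:

```python
# Compare how one exported file decodes under two candidate encodings.
raw = open("Y4340.txt", "rb").read()

# If the export is Mac Roman, byte 0xC9 is "…" and 0xCA is a non-breaking
# space; misread as Latin-1, the same bytes show up as "É" and "Ê".
print(raw.decode("latin-1")[:300])
print(raw.decode("mac_roman")[:300])
```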

Row 3:

Matt Jones @metamattj @JacquelynGill #ESA2013 Agreed re: commenting, but #knb and other @DataONEorg repos are open and non-proprietary. 6 months agoReplyRetweetFavorite

Row 4:

Carly Strasser @carlystrasser AGU abstract deadline extended to TOMORROW 6p. Submit to our session on managing data! fallmeeting.agu.org/2013/scientifi… cc @_inundata @DataONEorg 6 months agoReplyRetweetFavorite1 more

Here, it's obvious that the ellipses are still preserved. It's likely I should have converted the columns to "text only" – I confirmed that these are "General" formatted columns.

Let me see how long it takes to convert the columns from "general" to "text only" and see if that will make a difference for me.

I converted Y4340 to text. There is an ellipsis at row 4. Row 11 has a special character, ė.

Note: the keyboard shortcut for "Format Cells" is Command + 1.

Y4330 has ellipses at row 8.

Missed one before Y4210; I will have to find out which one it is.

For some reason I have a spreadsheet called "Y4170(2)" so that means there is a duplicate. Why?

There is definitely a Y4140.xlsx but I don't know if it is the same, and it's not worth opening the other one so I'll just process it as normal and check it later.

I'm going to stop at Y4060, open up the tab-delimited text files I have saved so far, and see if it is worth continuing (check for odd characters). Basically I'm wondering if the range function with Google Docs is not a better option for me. I have saved 31 so far and really don't like doing it manually.

Special characters are preserved nicely (ellipses are preserved, and non-ASCII characters such as "Pau Aragó" are preserved); it's just really annoying to do Command + 1, save as text, and then convert to a tab-delimited file.
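This conversion could also be scripted instead of done by hand in Excel. A sketch, assuming the downloads are .xlsx files in one folder and that the openpyxl library is available (the folder name is illustrative):

```python
import csv
from pathlib import Path

from openpyxl import load_workbook

# Convert every downloaded .xlsx export to a UTF-8 tab-delimited file,
# which keeps ellipses and accented characters intact.
for xlsx_path in sorted(Path("DataONE-Topsy-exports").glob("*.xlsx")):
    workbook = load_workbook(xlsx_path, read_only=True)
    sheet = workbook.active
    out_path = xlsx_path.with_suffix(".txt")
    with open(out_path, "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out, delimiter="\t")
        for row in sheet.iter_rows(values_only=True):
            writer.writerow(["" if cell is None else cell for cell in row])
```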

I expect it's faster to copy and paste out the unique IDs. I might open them in Chrome with the Google Chrome extension. Worried about stopping mid-way with all these windows open. But it appears I've processed Y4360 through Y4080, so I can just start again and exclude those if my computer crashes.

I'm on my home computer. Don't have the extension. Reference previous lab notebook entry: https://notebooks.dataone.org/data-science/scraping-dataoneorg-tweets-off-the-web-with-browser-extensions/

I mention two: link miner and Linkclumper. Looks like I installed "Linkclump" on my CICS workstation. I'll install that now on my home computer.

https://chrome.google.com/webstore/detail/linkclump/

The other option to explore is this: Is there something that will "fetch" all of the unique Google Spreadsheet IDs from one folder? In this case, fetch all the document IDs from the DataONE-Topsy folder. The key here is to be systematic – which is why a program would be preferable.

I did a Google search for "fetch all IDs from Google Drive" without quotes.

The first two results interested me.

https://developers.google.com/drive/v2/reference/files/get

I don't think this helps me, but I do have a "parameter name" that I am interested in: fileId

I searched the developers site for this string: "get fileId for all files in folder" without quotes. That's pretty straightforward to me, but the results are a bit beyond my skill level. However, I think this is pretty close to what I'm looking for: http://stackoverflow.com/questions/21681441/google-drive-file-id-and-folder-id-conventions. The call is "files.list."
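As a rough idea of what that files.list call could look like – a sketch only, assuming the Drive v2 API via google-api-python-client, previously saved OAuth credentials, and a made-up ID for the DataONE-Topsy folder:

```python
import httplib2
from apiclient.discovery import build
from oauth2client.file import Storage

# Load previously saved OAuth credentials; obtaining them in the first place
# is a separate step covered by the Drive API quickstart.
credentials = Storage("drive_credentials.json").get()
http = credentials.authorize(httplib2.Http())
service = build("drive", "v2", http=http)

FOLDER_ID = "0B_example_folder_id"  # hypothetical DataONE-Topsy folder ID

# files.list with a "parents" query returns every file in the folder; each
# item carries the document ID needed for the CSV export URL.
response = service.files().list(
    q="'%s' in parents" % FOLDER_ID, maxResults=1000).execute()
for item in response.get("items", []):
    print(item["id"], item["title"])
```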

I think I'm going to download the desktop version of Google Drive to see if there is another way of working with that. Get it here: https://tools.google.com/dlpage/drive/index.html

installgoogledrive.dmg in downloads folder. 25 megabytes. Plus 3 GB of content from Google Drive. Honestly do not want that junk on my computer; that is why it is on Google Drive.

So, I think the solution is to open these files in tabs, and very carefully and systematically copy and paste the unique document ID.

Probably will be faster, and LinkClump works, but it's difficult to open all the files because you can't scroll. Essentially it's limited to what you can see on your screen. I might wait until I can view them all on a larger screen (CICS workstation).
