Software Archeology @ RIT

[ar·che·ol·o·gy] n. the study of people by way of their artifacts
Week 9 - Rails and PostgreSQL

27 Oct 2013

Last Week

Last week we officially decided to split our modestly sized research team into two teams: a scraper development team and an analysis tools team. As briefly mentioned in last week's post by Shannon, we also discussed using Ruby on Rails instead of plain Ruby with ActiveRecord. I will discuss this more in the next section of this blog post.

This Week

There have been a number of major decisions this week concerning the resources we will use to move forward with the project. As mentioned previously, our team has finally decided to migrate from plain Ruby with ActiveRecord to Ruby on Rails (RoR). Because RoR is usually used for web-based applications, we will be somewhat perverting its standard use; as of right now we don't intend to build a web application. If we were to want the web application component in the future, though, we could easily build on our current project and use RoR to post our research questions and discoveries online.
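Using Rails this way mostly means relying on its models and migrations while skipping the controllers and views. A minimal sketch of what that could look like, assuming a hypothetical Commit model (the table and columns are placeholders, not our actual schema):

```ruby
# app/models/commit.rb -- a hypothetical model; the table and columns are assumptions
class Commit < ActiveRecord::Base
  validates :sha, presence: true, uniqueness: true
end
```

A one-off script can then run inside the Rails environment with something like `rails runner 'puts Commit.count'`, without ever starting the web server.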

Another major decision we've made is to switch from MySQL to PostgreSQL for database management. Several of our group members ran into compatibility problems between RoR and MySQL, where particular versions of each conflict with one another, so we have switched to PostgreSQL.
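In practice the switch is mostly a matter of swapping the database driver in the Gemfile and pointing config/database.yml at the postgresql adapter. A minimal sketch of the Gemfile change:

```ruby
# Gemfile -- replace the MySQL driver with the PostgreSQL one
# gem 'mysql2'
gem 'pg'
```

The corresponding config/database.yml entry then uses `adapter: postgresql` instead of `adapter: mysql2`.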

This week has consisted of lots of modifications to the scraper, which we hope to be running soon. We have been adding Trollop for command-line options and restructuring the scraper so that if the script is interrupted while running, we can restart it and it will pick up where it left off. This is extremely important in case of an unexpected interruption, and will save us a lot of time and resources should one occur. We have also been working on a separate cron job that checks every half hour whether the script is running and notifies Professor Meneely if it goes down. A rough sketch of both ideas is below.
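Roughly, Trollop gives the scraper a couple of command-line flags, and a small checkpoint file lets a restarted run skip whatever has already been scraped. This is only a sketch of the idea; scrape_item, total_items, and the flag names are assumptions, not the scraper's actual code:

```ruby
require 'trollop'

opts = Trollop::options do
  opt :resume, "Resume from the last saved checkpoint", default: false
  opt :checkpoint, "Path to the checkpoint file", type: :string, default: "scraper.checkpoint"
end

# Start from the saved position when --resume is given, otherwise from the beginning.
start = opts[:resume] && File.exist?(opts[:checkpoint]) ? File.read(opts[:checkpoint]).to_i : 0

(start...total_items).each do |i|        # total_items is a placeholder for the real work list
  scrape_item(i)                         # scrape_item is a placeholder for the real scraping step
  File.write(opts[:checkpoint], i + 1)   # record progress so an interrupted run can resume here
end
```

The watchdog could be as simple as a script that cron invokes every half hour; the script name, email address, and use of pgrep/mail here are assumptions:

```ruby
#!/usr/bin/env ruby
# check_scraper.rb -- run from cron, e.g.  */30 * * * * ruby /path/to/check_scraper.rb
unless system('pgrep', '-f', 'scraper.rb', out: File::NULL)
  system("echo 'scraper is not running' | mail -s 'scraper down' professor@example.edu")
end
```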

Next Week

Next week we plan on running the scraper and collecting our data, provided that all of the work on the scraper and cron job has been completed. If not, we will run the scraper the following week.
