MetaCPAN at the 2016 Perl QA Hackathon

Before I start, I'd like to thank our sponsors

This year I had the pleasure of attending my 4th Perl QA Hackathon. Before I get into any details, I'd like to thank the organizers: Neil Bowers, Barbie and JJ Allen. They did a fantastic job. It was a very well organized event and really a lot of fun. It was well worth attending and it made a huge difference to the MetaCPAN project.  Thanks also to Wendy for making sure that everyone had what they needed.

Lastly, I'd like to thank all of the sponsors who made this event possible. These companies and individuals understand what makes the QA Hackathon such an important event and I'm happy that they wanted to help make it happen.

The Crew

My focus this year (as always) was working on MetaCPAN, but this time around I had much more help than usual. Leo Lapworth joined us from London for the first two days, working on the sysadmin side. Mickey Nasriachi came in from Amsterdam to work with us on the back end code. Matt Trout spent a lot of his time helping us with debugging and database replication. Sawyer spent a great deal of his time pair programming with us and helping us debug some really thorny issues. Also, what began as a conversation with Joel Berger about a simple MetaCPAN patch resulted in him spending much of his time looking at various issues. He now has a solid understanding of the MetaCPAN stack and we hope he can continue to contribute as we move forward.

We had a really good crew and we were all quite focussed. We removed ourselves from the main hackathon room so that we were able to have our own conversations and be less subject to distracting conversations from other groups. Since we were just outside of the main room we were able to talk with various others as they passed by our table. It was like having a space to ourselves, but we still felt very much a part of the hackathon.

Our main goal was to upgrade MetaCPAN from Elasticsearch 0.20.2 to 2.3.0. I spent a lot of time on this with Clinton Gormley at last year's hackathon, when the plan was to go from 0.20.2 to a 1.x version. We were optimistic, but it became clear that it was a job we couldn't realistically finish. So, we left last year's hackathon with some good changes, but we weren't close to being able to deploy them. By this year, Elasticsearch had introduced even more breaking changes as it moved from 1.x to 2.x, so we had to factor those in as well.

For 2016, in the weeks leading up to the hackathon, Leo and I had been pushing a lot of code in preparation for this weekend. Around the same time, Mickey arrived on the scene and really moved things forward with his code changes too. So, we had a small core of developers working on the code well in advance of the hackathon. That's actually one of the nice things about an event like this: I didn't just write code once I got here. Having a firm date by which a number of things had to be done forced me to sit down and solve various problems in the weeks beforehand.

What did we actually get done?

Elasticsearch Cluster

One criticism of MetaCPAN has been a lack of redundancy. We've had a good amount of hardware available to us for some time, but we haven't had a really good way to take advantage of it. Thanks to some of the work leading up to the hackathon, v1 of the API will run on an Elasticsearch cluster of 3 machines (rather than the single production box which currently runs v0). Having a proper cluster at our disposal should make for faster searching and also greater redundancy if one of these machines needs to take an unscheduled break. On the human side, it will be a lot less stressful to lose one machine in a cluster of three than to lose one machine in a cluster of one. We all know these things happen; it's just a matter of time. So, we'll be better prepared for when a machine goes down.

Minion

Occasionally we need to re-index everything on CPAN. This takes a very long time. The current incarnation of MetaCPAN (v0) uses a script to do this, and it can take 12 hours or more to run. If that script runs into some unhandled exception along the way, you have the rare pleasure of starting it up again manually. It needs some babysitting and it's far from bulletproof. It's also hard to scale.

Rather than trying to speed up our current system, we've added a Minion queue to our setup. This means that when we re-index CPAN, we add each upload as an item in our queue. We can then start workers on various boxes on the same network and we can run indexing in parallel. In our experiments we ran 17 workers each on 3 different boxes, giving us 51 workers in total. This gives us more speed and also more insight into which jobs have failed, how far along we are with indexing etc. It's a huge improvement for us.
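
If you haven't seen Minion before, the general pattern looks something like this. This is a rough sketch rather than our actual code; the connection string, task name and archive path are made up.

    use strict;
    use warnings;
    use Minion;

    # Hypothetical connection string and task name -- the real MetaCPAN setup
    # differs in the details.
    my $minion = Minion->new( Pg => 'postgresql://metacpan@localhost/minion_queue' );

    # Register a task which indexes a single CPAN upload.
    $minion->add_task(
        index_release => sub {
            my ( $job, $archive ) = @_;
            # ... unpack the tarball and index it into Elasticsearch ...
            $job->finish("indexed $archive");
        }
    );

    # Enqueue one job per upload.  Failed jobs stay visible in the queue and can
    # be retried individually instead of restarting a 12 hour script.
    my @uploads = ('O/OA/OALDERS/Some-Dist-1.00.tar.gz');    # placeholder list
    $minion->enqueue( index_release => [$_] ) for @uploads;

    # Each box then starts some number of workers (17 per box in our experiments):
    #   $minion->worker->run;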

Postgres

Minion has more than one possible back end. We've chosen to go with Postgres. This means that we now have Postgres installed for the first time and also available for other uses. Matt Trout has been working on Postgres replication for us so that we have some redundancy for our queues as well. Once that is available, he can also write a Pg schema which MetaCPAN can use as part of the back end. This means that at some future date we could begin to store our data in both Pg and Elasticsearch. This would give us a hybrid approach, allowing us to use Elasticsearch for the things it does well and a relational database for the kinds of queries which a NoSQL store doesn't handle well or at all in some cases.

As a historical footnote, the original version of the API first inserted into an SQLite database and then dumped that data into Elasticsearch. We may eventually come full circle and use a similar approach with Postgres.

RAM Disk

As part of Leo's sysadmin work, he has set up a RAM disk for the indexer to use when unpacking tarballs. Even if this only saves a fraction of a second per archive, across 500,000 archives that small saving adds up to a real win.

Elasticsearch Upgrade

Currently production runs on Elasticsearch version 0.20.2.  Our work this weekend has pushed us to 2.3.0. Part of what has been holding us back is the many breaking changes involved in this particular upgrade. Much of our effort at the hackathon was directed towards dealing with these breaking changes. We haven't quite tackled all of them yet, but we're very close.

Deploying and Indexing a Beta Cluster

We now have a cluster of machines running our v1 beta.  I will publish the URLs as soon as we are ready for feedback.

Please note that our API versioning does not follow the Elasticsearch versioning. This frees us up to change API endpoints, etc., outside of the scope of another Elasticsearch upgrade.

CPAN River Integration

Joel Berger submitted a patch to integrate CPAN River statistics into the /distribution endpoint. The actual data will be provided by Neil Bowers. The patch to add this data to the endpoint has already been merged to the v1 branch, and Barbara has begun work on a front end display for the data.
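
Once that lands, getting at the data should be as simple as fetching a distribution from the API and reading the new fields. Roughly like this, though the host and the exact field names here are assumptions on my part, since the final shape of the response is still being settled:

    use strict;
    use warnings;
    use HTTP::Tiny;
    use JSON::PP qw( decode_json );

    # Assumed host and field names -- check the v1 docs once this is deployed.
    my $res = HTTP::Tiny->new->get('https://fastapi.metacpan.org/v1/distribution/Moose');
    die 'request failed' unless $res->{success};

    my $dist  = decode_json( $res->{content} );
    my $river = $dist->{river};    # e.g. { total => ..., immediate => ..., bucket => ... }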

CPANCover.com Integration

I had a chance to speak with Paul Johnson about cpancover.com. I had initially put together an integration for his site two years ago at the QA Hackathon. I thought the integration was fine, but I ran into enough resistance from the MetaCPAN team that the pull request was never merged. We've now agreed on a way to move forward which should make everybody happy. There are open tickets on both the front and back end of MetaCPAN to address this.

Debian Packaging Information

Book is working on adding some information which can be used to correlate modules with their corresponding Debian packages. Once this is finished, this data can also be added to the distribution endpoint. The integration itself is pretty simple and will work much like the CPAN River.

Changes files

Graham Knopf wasn't able to attend the QA Hackathon, but he did spend some time hacking from home. He has submitted a patch to alter how Changes files are displayed.

Moving Towards Test2::Harness

I spoke with Chad Granum on the evening before the hackathon and mentioned that we were using Test::Aggregate, one of the few distributions which was not yet playing nicely with Test2. I wasn't too worried about this, since we pin our dependencies via Carton, but also because I'd been hoping to move away from it. I had been thinking about Test::Class::Moose as an alternative, but I didn't want to go to the trouble of setting up test runners etc. Something simpler would be nice. Chad showed me Test2::Harness, which would give us the same advantages we got from running under Test::Aggregate. It looks great and should be available shortly. In the meantime I've gutted the Test::Aggregate logic from the tests and we're running everything the old-fashioned (slower) way for the time being. A switch to Test2::Harness in the near future should be trivial.

MetaCPAN::Moose

As part of our general cleanup, I released MetaCPAN::Moose. This is a simple bit of code which imports MooseX::StrictConstructor and namespace::autoclean into any class which uses it. After writing the code and the tests, I showed it to Sawyer. He sat down and immediately rewrote it using Import::Into. The code was now at least 50% smaller and a lot cleaner. The tests continued to pass, so I was happy to release it to CPAN.
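
The Import::Into version boils down to a few lines. This is a simplified sketch rather than a verbatim copy of the released module, but it shows the idea:

    package MetaCPAN::Moose;

    use strict;
    use warnings;

    use Import::Into;
    use Moose                     ();
    use MooseX::StrictConstructor ();
    use namespace::autoclean      ();

    sub import {
        my $caller = caller;

        # Re-export the whole stack into whichever class said "use MetaCPAN::Moose".
        Moose->import::into($caller);
        MooseX::StrictConstructor->import::into($caller);
        namespace::autoclean->import::into($caller);
    }

    1;

A class can then replace the usual "use Moose; use MooseX::StrictConstructor; use namespace::autoclean;" boilerplate with a single "use MetaCPAN::Moose;".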

Moving forward we're going to publish a few more of our internal modules to CPAN. These will serve several purposes:

  • It will be useful to us as a way of sharing code between our various apps. We use Carton to manage the installs for each app, so sharing code can be tricky, and we didn't want to go the submodule route unless we really had to.
  • Some people may also find this code useful. It's a good way to showcase how we do things (like setting up your own custom Moose type library), and people could learn from it.
  • Alternatively, people might look at it and realize it's terrible. At that point they'll hopefully hack on it and send pull requests. Because this code is standalone with its own test suite, the overhead of getting started will be much, much less than it is for hacking on the rest of MetaCPAN.

I don't generally think publishing internal logic to CPAN is a good idea, but for the reasons stated above, I think the code we're talking about is well suited to it.

CPANTesters Data

We used to import CPAN Testers data into MetaCPAN using an SQLite database which they provided. At some point this database became unavailable. I'm encouraged to hear that this may not be a permanent state of affairs. If something can be worked out, MetaCPAN can once again easily import testers data into its API using the database.

Somewhere out there I can hear someone complaining that this isn't RESTful or whatever, but for the amount of data involved, it's actually a good fit. I did discuss with Doug what a REST API for this might look like, but to be honest, that would potentially be much more work than just creating the database on some arbitrary schedule and publishing it.
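
For anyone wondering why the SQLite file is such a good fit: consuming it is just DBI plus a couple of queries. Something along these lines, keeping in mind that the file name and table layout here are from memory and may not match whatever ends up being published:

    use strict;
    use warnings;
    use DBI;

    # Placeholder file name and table layout.
    my $dbh = DBI->connect( 'dbi:SQLite:dbname=cpanstats.db', q{}, q{}, { RaiseError => 1 } );

    my $reports = $dbh->selectall_arrayref(
        'SELECT dist, version, state, COUNT(*) AS report_count
           FROM cpanstats
       GROUP BY dist, version, state',
        { Slice => {} },
    );

    # Each row can then be merged into the matching release document in Elasticsearch.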

Interesting Things I Learned From Random Conversations:

  • Matt Trout suggests abandoning MooseX::Types and moving our type checking to Type::Tiny (there's a short sketch of what that looks like after this list). I'm on board with that, but it's not a priority right now.
  • I learned from Sawyer that a simple speed optimization is switching to a Perl which is compiled without taint support. He also recommended some XS modules for header and cookie handling. The XS part wasn't news to me, but it's something I'll keep in mind for the future and certainly something I can make sure we do with MetaCPAN.

    Edit and caveat: As far as compiling Perl without taint mode goes, Sawyer was kind enough to refer me to some relevant p5p messages: http://nntp.perl.org/group/perl.perl5.porters/193822 http://nntp.perl.org/group/perl.perl5.porters/194361 Apparently there is some performance to be gained, but whether or not it's worthwhile for you likely depends very much on the behaviour of your application.

  • I heard (once again) that Devel::Confess is a "better" tool for debugging. I've been using it for a while now and am very happy with it. I'm not the only one.
  • From Mickey, I learned about Devel::QuickCover, which sounds like an interesting way to get a first pass at coverage data.
  • I now know how to pronounce Upasana.
  • I learned that I'm not the only person who has no clue how to read a flame graph.
  • After a lengthy conversation with Matt Trout on the Thursday it wasn't until I said, "hang on, I'll send you the link on IRC" that he looked at his screen and then looked back up and said, "oh, that's who you are". I guess I could have introduced myself formally when he first sat down, but eventually we got there.
  • After seeing the Roost laptop stand in action, I think I need one.
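
For the curious, a Type::Tiny type library ends up looking much like a MooseX::Types one. A minimal sketch (the package and type names are invented for illustration, not lifted from MetaCPAN):

    package MetaCPAN::Types;    # hypothetical name

    use Type::Library -base, -declare => qw( PauseID );
    use Type::Utils -all;
    use Types::Standard qw( Str );

    # A PAUSE ID is an uppercase ASCII string of 2 to 9 characters.
    declare PauseID, as Str, where { /\A[A-Z]{2,9}\z/ };

    1;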

Unrelated to MetaCPAN

Karen Etheridge was able to merge my fix to allow MooseX::Getopt to play nicely with init_arg. It's a bug that has bitten me on more than one occasion. The fix has now been released.

After a conversation with BINGOS about Sort::Naturally, he gave me co-maint on the module so that I can look at addressing an outstanding issue.

In Conclusion

For me, it was a great few days for moving the project along and socially quite fun. I got to see a bit of London on my arrival and spend a few hours at the British Museum, which I last visited about 20 years ago. In the afternoon, Leo was kind enough to drive me up to Rugby. Leo, Mickey and Joel were among the people whom I have spoken with on IRC but had never met in person. Making those real life connections is great.

On a practical level, I mostly started looking the correct way when crossing the street, but I wouldn't bet anyone else's safety on my ability to do the right thing there. Most of my ride from the airport to Leo's office consisted of me feeling quite sick to my stomach, as part of me really wanted the driver to switch to the correct (right) side of the road. London rush hour traffic and narrow streets with two-way traffic probably didn't help.

It was nice to see RJBS get a special show of thanks for his years as pumpking and also to witness the passing of the torch to Sawyer, who will do a fantastic job as he takes over. Also the tradition of publicly thanking the organizers has continued, which is a nice part of the weekend.

I should mention that this year there were no special outings. No video game museum tours, no chance to see how Chartreuse is made. Not even a trip to the set of Downton Abbey. That meant a few extra hours of hacking, bug squashing etc, which is nice too. I'm sure that deep down inside Neil really wanted to take us to a filming of Coronation Street, but he resisted the urge in order to further the goal of productivity.

All in all, I felt it was an extremely productive week for me and for MetaCPAN in general. My sincere thanks go out to the gang for having had me along once again this year.

How I Spent my 2015 NY.pm Hackathon

On May 2, 2015 I had the pleasure of attending the NY.pm hackathon, which was hosted at the Bloomberg tower in Manhattan. I was privileged to be one of 5 developers to have their travel and hotel sponsored by Bloomberg L.P. This made attending the event very easy for me. Basically all I had to do was show up at the airport and the rest was taken care of for me!

The event was very well organized, had a great vibe and was very encouraging to newcomers (to Perl and to open source contributions). For my part, I was there to work on MetaCPAN and (hopefully) be there as a resource to anyone else who wanted to contribute to MetaCPAN.

I'm happy to say that I got a number of things done. I was able to fix all of the failing tests on ElasticSearchX::Model. This is a module which MetaCPAN relies on heavily. Going into it, I wasn't sure if the failures were in the code or in the tests. Luckily it was just a problem with the tests, so that was easy enough to fix. I trapped some warnings while I was at it and eventually got a green light from Travis. I got a good chunk of this done on the flight in, so I was able to finish it and release a new version as my first order of business at the hackathon.

Moving forward, I continued the MetaCPAN Elasticsearch upgrade which I had been working on at the QA Hackathon. I was able to fix bugs in the module which imports CPAN mirror data into the little-known mirror endpoint of the API. I also (mostly) fixed bugs in the module which imports CPANTesters data into the release objects of the API. That still needs some work, but it took a fair amount of digging around.

In addition to this, I worked with MATTP, who added more handy keyboard shortcuts to MetaCPAN. (For example, go to https://metacpan.org/pod/Plack and type "pr" -- that will take you straight to the GitHub pull requests for this repository.) I was able to merge and deploy this change at the hackathon.

I also had some good conversations with RJBS about finding recursive dependencies for modules and graphing them. It turns out he already has a workable solution for this and I don't think converting his code to use MetaCPAN would actually speed things up for him.

I finally met Yanick Champoux, who was a very early contributor to MetaCPAN. I was able to recognize him from the half of his face which is exposed by his avatar! I should also mention that he helped me find my phone not once, but twice in 24 hours. (I really have to keep better track of it.)

I also had a fun dinner with Florian Ragwitz and Augustina Ragwitz. (Florian has been involved with MetaCPAN since it was about six months old).

And, to round out the namedropping, I also met the following folks for the first time: I had an interesting chat with David Farrell about perltricks.com and using Perl6 to parse Pod, Charlie Gonzalez showed me all of the interesting stuff a Fitbit can track, and I had very brief chats with Nick Patch and Peter Martini, whom I basically crossed paths with as I was headed for my ride to the airport.

The facilities were outstanding as was the plentiful food (breakfast and lunch). This was all made possible by the sponsors: Bloomberg, RubensteinTech and PerlTricks.com.

The organizers did a fantastic job with all of this, so I should particularly thank Jim Keenan, Charlie Gonzalez, David Golden and Kevin P. Fleming.

This was the 2nd NY.pm hackathon. I have a hunch that this means there will also be a 3rd. If you have a chance to attend this hackathon in the future, my advice would be: do it!

I’ll be at the 2015 New York Perl Hackathon

I'm happy to say that I'll be participating in the 2015 New York Perl Hackathon. I'd like to thank Bloomberg, L.P. for sponsoring me so that I can attend this event.

While I'm at the hackathon, I hope to continue my work on MetaCPAN as I did at the QA Hackathon one week ago. I've put together a list of possible MetaCPAN projects. If anyone would like to take on any of these projects, feel free to get in touch with me in advance if you have any questions on what might be involved with any of these proposals.

I'll also be available to help out with things which aren't MetaCPAN-related: Perl, Git, GitHub, etc. There's more general information at the hackathon wiki.

I will, of course, report back on my progress at the hackathon after the event has taken place. I'm looking forward to a productive day of hacking with a group of smart, motivated people.

How I Spent My 2015 Perl QA Hackathon

One week ago I was in Berlin at the Perl Quality Assurance Hackathon (QAH), happily hacking away on MetaCPAN. Today I'll summarize the good, the bad and the ugly about my time in Berlin. Spoiler alert: it was all good.

This was the third year that I've been able to attend the QAH; I was previously in Paris and Lyon. From past years, I knew that I'd have some serious time to put my head down and get some work done. I didn't make an overly ambitious TODO list, since there's one main project for MetaCPAN right now: upgrading from Elasticsearch 0.20.2 to 1.5.0. It's a big jump with a number of breaking changes. MetaCPAN has a pretty big stack and a lot of lines of code. It also relies on ElasticSearchX::Model as an abstraction, which needed some work for this upgrade as well. So, I figured I'd put some effort into this and work on a few other things as they came up.

My hackathon always begins at YYZ [insert joke about Rush] and continues on the flight in. I generally get a lot of work done in this phase rather than getting sucked into just watching bad movies. (Don't get me wrong, I truly enjoy a bad movie, but I also rarely get a big block of time to move things forward with my open source projects).

On the flight I decided to get some smaller things out of the way, mostly not directly related to MetaCPAN. I wasn't able to release anything from the air, so I'll summarize those as part of my hackathon Day 1.

In addition to the code I wrote on the plane, I wrote a couple of blog posts to thank new MetaCPAN sponsors. I have 3 outstanding posts and I was able to write 2 of them on the plane. You'll see those shortly. I'll post them individually once all of the QAH blog posting dies down.

The Day Before

I flew out on Tuesday and arrived early on Wednesday morning.  I was at the hotel by 9 AM, but the room wasn't available until 3 PM, so I ditched my bags and headed out on the town.  I had decided to fly with only enough clothing to get me through day one, so one of my first tasks was to find something to wear.  That was actually a lot of fun.  One of my favourite moments was when I told a salesperson that I was looking for a medium sized shirt.  Without skipping a beat he said, "you'll need a large."  He was right.  I'm sure there was no judgement implied!

In the evening I got to meet (for the first time) a couple of folks whom I've worked with on MetaCPAN since the very beginning.  Johannes Plunien, the creator of GitHub Meets CPAN, happens to live in Berlin.  He hadn't been planning to attend the hackathon, but was free to hang out in the evening, so he came by the hotel.  Next, Clinton Gormley, one of the authors of Elasticsearch: The Definitive Guide, came by the hotel as well.  He was in town for the hackathon, having been sponsored by Elastic to attend.  I knew he was tall, but I wasn't quite prepared for how he would tower over me.  It's a good thing he's a friendly giant!  Next, Neil Bowers showed up and we all joined the others for dinner.  For me this is a good illustration of how hackathons are helpful beyond just getting things done.  It was really great to get to know Johannes and Clint in person.  It's also nice to have Neil around, since he speaks the Queen's English.

Day One (Thursday)

  • On the airplane I had written some code for Plack which allows it to use Cookie::Baker in Plack::Request. I had actually proposed to send this pull request back in November 2014 and it had kind of been hanging over my head that I hadn't gotten to it. MIYAGAWA merged it on Day 1. It shrinks Plack::Request by 34 lines.
  • I merged a MetaCPAN pull request which adds boilerplate installation instructions for modules to the left sidebar of module and distribution pages. (You'll find it under "PERMALINKS"). Once this was merged we found some issues related to the changes and after a few iterations I had those cleaned up and deployed to production.
  • I added JS beautification to MetaCPAN. We already tidy our Perl code automatically. This keeps our JS looking spiffy as well.
  • On the plane I basically finished up a proof of concept I had been working on in order to showcase HTTP::BrowserDetect. In the evening back at the hotel I registered a domain name and launched BrowserDetect.org. I had been toying with making this a web service so that you could use it for robot detection etc., but I opted for a Minimum Viable Product to start with.
  • I merged a bug fix for WWW::Mechanize::Cached and released a new version.
  • I realized that I had some unreleased changes in WWW::RoboCop, so I released a new version of that and also fixed a test dependency and then re-released.
  • On the airplane I had made two big improvements to LWP::ConsoleLogger. First, I had converted it from Moose to Moo, which means that I now have the option of adding it as a proper dependency to GitHub::MergeVelocity without pulling in all of Moose. I also began using HTTP::Body to parse POST params (there's a short sketch of that after this list). What I had been doing before to parse POST params did work, but it made me sad. I released a new LWP::ConsoleLogger with these improvements.
  • In addition to all of this, I worked with Clinton Gormley on the Elasticsearch upgrade.  That is, he did the bulk of that work and I helped him with issues specific to MetaCPAN.
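
On the HTTP::Body change mentioned above, the module does the heavy lifting of parsing a request body for you. A stripped-down example (the request content is obviously made up):

    use strict;
    use warnings;
    use HTTP::Body;

    # A made-up urlencoded request body, just to show the parsing step.
    my $content = 'query=Plack&rows=20';

    my $body = HTTP::Body->new( 'application/x-www-form-urlencoded', length $content );
    $body->add($content);

    my $params = $body->param;    # { query => 'Plack', rows => 20 }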

Day Two (Friday)

  • I merged my own pull request, begun on the airplane, which moves MetaCPAN's Pod generation out of the Catalyst controller.  This makes it easier to test.  It will also allow us in future to accept an arbitrary base URL for Pod generation via the API, meaning that if you are generating Pod but don't want it to link back to MetaCPAN, you'll be able to provide your own URL.  This is also something we'll be able to use on the MetaCPAN search site when developing locally.  Right now Pod links bounce you from your development site to the production site, which is really confusing.  This part is not yet implemented, but it should be fairly trivial to add.
  • I continued to work with Clinton Gormley on his Elasticsearch improvements.
  • I fixed an issue with permissions on the MetaCPAN production machines.  This allows us to run many commands without having to su to another user first, which is quite helpful.
  • I released a new ElasticSearchX::Model.  This included some of Clinton's improvements as well as a few changes which were required to keep up with changes in Dist::Zilla plugins.
  • I spent some time improving the tests for the MetaCPAN API.
  • I patched MetaCPAN.org to recognize more extensions which are commonly used for Markdown.  This will make it easier to find README files for more distributions.
  • I also spent some time working with CPAN::Faker to add some tests for a new endpoint which Clinton was creating after a conversation with MIYAGAWA.  This will be used by cpanm and will simplify its internals once it is finished.  Essentially cpanm builds a complex API query to find the correct download URL for modules under various conditions.  We've decided to move this logic directly into the MetaCPAN API.  This will remove a fair chunk of logic from cpanm and will also make this new endpoint available to anyone else who wishes to use it.  That's a big win.
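
To give a rough idea of where this is heading, a client like cpanm should eventually be able to ask the API directly for the right tarball. Something like the following, though the endpoint name, host and response fields are all assumptions, since this was still being designed at the time:

    use strict;
    use warnings;
    use HTTP::Tiny;
    use JSON::PP qw( decode_json );

    # Assumed endpoint and field names.
    my $res = HTTP::Tiny->new->get('https://fastapi.metacpan.org/v1/download_url/Plack');
    die 'request failed' unless $res->{success};

    my $info = decode_json( $res->{content} );
    print "$info->{download_url}\n";    # the tarball cpanm should fetch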

Day Three (Saturday)

  • I fixed a bug in ElasticSearchX::Model which was particularly hard to debug.  It was throwing an exception in a DEMOLISH sub.  The problem with throwing an exception at object teardown is that you have no guarantee of which classes are still available.  As a result, Moose was trying to inflate an exception into a class which was no longer available.  That, in turn, triggers a new exception, which triggers a new exception, which ... (well, you can see where this is going).  Basically, hilarity ensues.  I was happy to get that sorted and released.
  • Based on my work with CPAN::Faker on Day Two, I came to the conclusion that it wasn't going to be a solution for all of our problems.  Possibly it could be, but not without a lot of effort, since it wants you to mock up CPAN distributions.  I had been spending time running the MetaCPAN indexer on a staging machine using the latest Elasticsearch, and I was watching real tarballs trigger exceptions in our code.  I didn't want to have to find the problems with those tarballs and then mock them up using CPAN::Faker, mostly because I'm lazy.  (This is not strictly a virtue of programming.  I'm pretty sure I was born lazy.)  So, I reworked the API tests to create a CPAN of arbitrary size using OrePAN2 (there's a short sketch of this after the list).  This now allows us to add any problematic tarballs to our test suite (but not to our Git repository).  This will make regression testing much easier.  It has already made my debugging far easier.
  • This was also Clinton's last day at the hackathon.  He did an incredible amount of work to move the upgrade forward and he also very graciously gave a presentation on Elasticsearch's query language which makes it much easier to understand.  I would list off all of Clint's accomplishments but that would make this blog post ridiculously long, if it isn't already.
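
The OrePAN2 part of that is pleasantly small. Roughly this, with placeholder paths rather than our actual test layout:

    use strict;
    use warnings;
    use OrePAN2::Injector;
    use OrePAN2::Indexer;

    # Placeholder paths -- the idea is to turn a pile of problem tarballs into a tiny CPAN.
    my $darkpan = 't/var/darkpan';

    my $injector = OrePAN2::Injector->new( directory => $darkpan );
    $injector->inject($_) for glob 't/var/tarballs/*.tar.gz';

    # Write the 02packages index so the indexer can treat this directory like a real CPAN.
    OrePAN2::Indexer->new( directory => $darkpan )->make_index;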

Day Four (Sunday)

  • As part of my work on the API's test suite, I ended up releasing Git::Helpers.  The name is misleading since there is currently only one helper (oops!) but I actually do plan to add more.
  • The MetaCPAN API was getting pummelled over the time I was there.  I did some troubleshooting to find which IPs and UserAgents were occurring most in our log files.  I put those in a newly created metacpan-sysadmin repository.
  • As part of my work to add an arbitrary CPAN to the API tests, I came across a bug in CPAN::Repository.  I had wanted something to write an 06perms file and had casually mentioned this to Neil Bowers.  He had suggested this module.  I raised a ticket about the issue and GETTY immediately gave me co-maint and added me as a collaborator on the GitHub repository.  I've now patched this particular bug and have released a new version of the module.
  • After having dinner with PLU earlier in the week, we had convinced him to visit the hackathon on the weekend.  He graciously came by on the Sunday and worked like a machine.  By the time he left, he had sent 4 pull requests for metacpan.org.  The most notable of these is that we now have keyboard shortcuts, which I have wanted for a very long time.  Type "?" on a module page, for instance, to see what you can now do.  The shortcuts will be revised again shortly, so don't get too attached to all of them, but this is going to make your MetaCPAN browsing much, much easier.
  • I was able to merge and deploy all of PLU's commits on the Sunday.

The Day After (Monday)

  • I got to the airport way too early, so I had a couple of hours to keep working.  I continued to refactor the API tests.  I also worked a bit on the download_url tests which Clinton Gormley had created for the new cpanm endpoint.  As part of my work I added some more documentation to the API as well.
  • I also looked into an issue with OrePAN2 which makes it hard to inject developer releases into your local DarkPAN.  This actually took up a bunch of my time, as I had to poke around in Parse::PMFile as well as Parse::LocalDistribution.  ISHIGAKI has kindly worked through some of these issues with me.  As a result, I think we have a good solution, and it should be implemented and released shortly.
  • I continued to poke at the test suite and various little things on the plane until I was exhausted enough that all I could reasonably do was watch a bad movie.

What I Didn't Do

We didn't manage to finish the Elasticsearch migration, but that's OK.  Collectively, a huge amount of work on this was finished.  It was probably not realistic to think it would all be done over 4 days, especially given that MetaCPAN is a service which is heavily used in production.  Now that we're out of beta, we can't just push stuff out to production and hope that it doesn't break too badly.

If you've read other hackathon reports, you'll have read about all of the discussions around CPAN which took place.  I made a conscious decision not to participate in these discussions because a) I needed the time to work on MetaCPAN and b) there were other people who have thought much more about these things than I have.  I trust them to make the right decisions and from what I understand, they made a lot of progress.

Socially, there are a lot of people at the hackathon that I didn't really connect with.  I had some good conversations with various people, but I did spend a lot of time with headphones on trying to get this or that sorted.  That was a conscious decision as well.  I tried to be available to anyone who had MetaCPAN questions but I also had to take advantage of this opportunity since I won't have another 4 day block to work on MetaCPAN at any point in the near future.

Thank You (The Credits)

Basically, I just had to show up and do my thing.  That wouldn't have happened without many, many other people who did their part to make this happen.  First off, I should thank MaxMind for allowing me to use my annual training time to attend the hackathon.  It's much easier to attend an event like this if you don't have to take vacation to do it.  Basically it was a choice between attending the QAH or attending YAPC::NA. So, I won't be at YAPC this year, but I think it's a worthwhile trade-off.

I should thank my family for letting me swan off to Europe to pursue my hobby and just generally enjoy myself.  (I keep trying to say "this is actually hard work", but nobody at my house is buying it.)

I should say that Tina Müller did a fantastic job of organizing the hackathon, supported by Andreas König. Also, Neil Bowers worked his magic to conjure up sponsors for the event. Whenever I saw Wendy van Dijk, she was either hauling food from a shopping trip, washing and preparing food, or making trips up and down the stairs to do so. When there's no elevator available, that's no easy task. 🙂

I'll close with a proper thank you to our sponsors!