Announcing meta::hack

Every so often, someone asks if they can donate money to MetaCPAN. I usually direct them to CPAN Testers, since (due to our generous hosting sponsors) we've generally not had a need for money. You can probably see where I'm going with this. Times have changed. We're no longer turning financial sponsors away.

Back at the QA Hackathon in Rugby, we had a great group of hackers together and we got a lot of work done. However, as we worked together, it became clear that the size of our job meant that we wouldn't be able to finish everything we had set out to do over that four day period. There are times when there's no replacement for getting everyone in the same room together.


The first dedicated MetaCPAN hackathon will be held at the offices of ServerCentral
in Chicago, from November 17th through 20th. The primary goal for this hackathon is to complete MetaCPAN’s transition to Elasticsearch version 2. This will enable the live service to run on a cluster of machines, greatly improving reliability and performance. The hackathon will also give the core team a chance to plan work for the coming 18 months.

The meta::hack event is a hackathon where we're bringing together key developers to work on the MetaCPAN search engine and API. This will give core team members time to work together to complete the transition to Elasticsearch version 2, and time to discuss gnarly issues and plan the roadmap beyond the v1 upgrade.

MetaCPAN is now one of the key tools in a Perl developer's toolbox, so supporting this event is a great way to support the Perl community and raise your company's profile at the same time. This hackathon is by invitation only. It’s a core group of MetaCPAN hackers. We are keeping the group small in order to maintain focus on the v1 API and maximize the productivity of the group.

Why sponsor the MetaCPAN Hackathon?


• If your company uses Perl in any way, then your developers almost certainly use MetaCPAN to find CPAN modules, and they probably use other tools that are built on the MetaCPAN API.
• The MetaCPAN upgrade will improve the search engine and the API for all Perl developers. As a critical tool, we need it to be always available, and fast. This upgrade is a key step in that direction.
• This is a good way to establish your company as a friend of Perl, for example if you're hiring.



There will be 8 people taking part, including me. Everyone taking part is an experienced senior-level software engineer, and most of them have already spent a lot of time working on MetaCPAN. As noted above, this is an invitational event with a very specific focus.

What is meta::hack?


MetaCPAN was created in late 2010. Version 0 of the MetaCPAN API was built on a very early version of Elasticsearch. For the first 5 years, most of the work on MetaCPAN focussed on improving the data coverage, and the web interface. In that time Elasticsearch has moved on, and we're now well behind.

The work to upgrade Elasticsearch began in May of 2014. It continued in early Feb of 2015. Later, at the 2015 QA Hackathon in Berlin, Clinton Gormley (who works for Elastic) and I worked on moving MetaCPAN to Elasticsearch version 2. This work was continued at the 2016 QA Hackathon in Rugby, and as a result we now have a beta version in live usage.

The primary goal of meta::hack is to complete the port to Elasticsearch version 2, so the public API and search engine can be switched over. There are a number of benefits:

• Switching from a single server to a cluster of 3 servers, giving a more reliable service and improved performance.
• Once we decommission the old service, we’ll be able to set up a second cluster of 3 machines in a second data centre, for further improvements.
• We’ll be able to take advantage of new Elasticsearch features, like search suggesters.
• We’ll be able to use a new endpoint that has been developed specifically to speed up cpanminus lookups. Cpanminus is probably the most widely used CPAN client these days, so improving this will benefit a large percentage of the community.
• If and when search.cpan.org is decommissioned, we’ll be able to handle the extra traffic that will bring with it, and we’ll also have the redundancy to do this safely.
• We’ll be able to shift focus back to bug fixes and new MetaCPAN features.

Becoming a Sponsor


Neil Bowers has kindly taken on the task of shepherding the sponsorship process.  (He also wrote the sponsorship prospectus from which I cribbed most of this post.) Please contact Neil or contact me for a copy of the meta::hack sponsorship prospectus.  It contains most of the information listed above as well as the various sponsorship levels which are available.  Thank you for your help in making this event happen.  We're looking forward to getting the key people together in one room again and making this already useful tool even better.

Don’t Forget about URI::Heuristic

Imagine you've got some user input that is supposed to be a valid URL, but it's user input, so you can't be sure of anything. It's not very consistent data, so you at least make sure to prepend a default scheme to it. It's a fairly common case. Sometimes I see it solved this way:
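The snippet is missing from this version of the post, but a reconstruction of the sort of hand-rolled check being described might look like this (the variable names and regex are my guesses, not the original code):

```perl
use strict;
use warnings;

# Naive approach: prepend a default scheme if one appears to be missing.
my $input = '';

my $url = $input;
$url = "http://$url" unless $url =~ m{^http://}i;

# Bug: '' fails the check above, so it
# ends up as 'http://'.
```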

This converts a bare hostname into something resembling a valid URL, but it can be error prone. For instance, what if I forgot to make the regex case insensitive? Actually, I've already made a mistake. Did you spot it? In my haste I've neglected to deal with https URLs. Not good. URI::Heuristic can help here.
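The original example didn't survive here; a minimal sketch using URI::Heuristic's uf_uristr function would be something like:

```perl
use strict;
use warnings;
use feature qw( say );

use URI::Heuristic qw( uf_uristr );

# uf_uristr() guesses a full URL string from a fragment.
say uf_uristr('');    #
say uf_uristr('');   # left alone -- https is handled
```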

This does exactly the same thing as the example above, but I've left the logic of checking for an existing scheme to the URI::Heuristic module. If you like this approach, but you'd rather get a URI object back then try this:
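Again the original snippet is missing, but uf_uri is the function to reach for here; it behaves like uf_uristr but returns a URI object:

```perl
use strict;
use warnings;
use feature qw( say );

use URI::Heuristic qw( uf_uri );

my $uri = uf_uri('');   # returns a URI object
say $uri->scheme;                   # http
say $uri->host;                     #
```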


Are we sure this is what we want? Checking for an existing scheme is helpful, and even if we weren't using this module, we'd probably want to do this check anyway.

That's it! This module has been around for almost 18 years now, but it still solves some of today's problems.

How to Get a CPAN Module Download URL

Every so often you find yourself requiring the download URL for a CPAN module. You can use the MetaCPAN API to do this quite easily, but depending on your use case, you may not be able to do this in a single query. Well, that's actually not entirely true. Now that we have v1 of the MetaCPAN API deployed, you can test out the shiny new (experimental) download_url endpoint. This was an endpoint added by Clinton Gormley at the QA Hackathon in Berlin. Its primary purpose is to make it easy for an app like cpanm to figure out which archive to download when a module needs to be installed. MetaCPAN::Client doesn't support this new endpoint yet, but if you want to take advantage of it, it's pretty easy.
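The script itself is missing from this version of the post. A sketch using only core modules might look like this (the helper name is mine; the endpoint URL is the real one):

```perl
#!/usr/bin/env perl
use strict;
use warnings;
use feature qw( say );

use HTTP::Tiny;
use JSON::PP qw( decode_json );    # core JSON module

# Ask the (experimental) download_url endpoint for a module's tarball URL.
sub download_url_for {
    my $module = shift;
    my $res    = HTTP::Tiny->new->get(
        '' . $module );
    die "request failed\n" unless $res->{success};
    return decode_json( $res->{content} )->{download_url};
}

say download_url_for( $ARGV[0] ) if @ARGV;
```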

Now invoke your script:

olaf$ perl Plack


After I originally wrote this post, MICKEY stepped up and actually added the functionality to MetaCPAN::Client. A huge thank you to him for doing this. 🙂 Let's try this again:
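Assuming a MetaCPAN::Client recent enough to include that new support, the whole thing collapses to roughly:

```perl
use strict;
use warnings;
use feature qw( say );

use MetaCPAN::Client;

# download_url() returns an object; its download_url attribute
# holds the actual tarball URL.
say MetaCPAN::Client->new->download_url('Plack')->download_url;
```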

That cuts the lines of code almost in half and is less error prone than crafting the query ourselves. I'd encourage you to use MetaCPAN::Client unless you have a compelling reason not to.


This endpoint is experimental.  It might not do what you want in all cases.  See this GitHub issue for reference.  Please add to this issue if you find more cases which need to be addressed.  Having said that, this endpoint should do the right thing for most cases.  Feel free to play with it to see if it suits your needs.

Easy Perl OAuth Integration with Runkeeper and Spotify

I've been tooling around with a fun little app that I'm building on evenings and weekends. As part of that work I figured I'd let users authenticate via Runkeeper. Luckily Runkeeper uses OAuth2 and it's all pretty easy to get going with. I've published my very small contribution as Mojolicious::Plugin::Web::Auth::Site::Runkeeper.

On a similar note, earlier this year I also released Mojolicious::Plugin::Web::Auth::Site::Spotify.

If you're already using Mojolicious::Plugin::Web::Auth, then these modules will make it trivial for you to connect with the Runkeeper and/or Spotify web services.
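For the curious, wiring one of these up looks roughly like the following Mojolicious::Lite sketch (the credentials and the session handling are placeholders of my own):

```perl
use Mojolicious::Lite;

# 'key' and 'secret' come from your registered Runkeeper application.
plugin 'Web::Auth',
    module      => 'Runkeeper',
    key         => 'your-client-id',
    secret      => 'your-client-secret',
    on_finished => sub {
        my ( $c, $access_token ) = @_;
        $c->session( token => $access_token );
        $c->redirect_to('/');
    };

get '/' => sub { shift->render( text => 'Logged in!' ) };

app->start;
```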

MetaCPAN at the 2016 Perl QA Hackathon

Before I start, I'd like to thank our sponsors

This year I once again had the pleasure of attending my 4th Perl QA Hackathon. Before I get into any details, I'd like to thank the organizers: Neil Bowers, Barbie and JJ Allen. They did a fantastic job. It was a very well organized event and really a lot of fun. It was well worth attending and it made a huge difference to the MetaCPAN project.  Thanks also to Wendy for making sure that everyone had what they needed.

Lastly, I'd like to thank all of the sponsors who made this event possible. These companies and individuals understand what makes the QA Hackathon such an important event and I'm happy that they wanted to help make it happen.

The Crew

My focus this year (as always) was working on MetaCPAN, but this time around I had much more help than usual. Leo Lapworth joined us from London for the first two days, working on the sysadmin side. Mickey Nasriachi came in from Amsterdam to work with us on the back end code. Matt Trout spent a lot of his time helping us with debugging and database replication. Sawyer spent a great deal of his time pair programming with us and helping us debug some really thorny issues. Also, what began as a conversation with Joel Berger about a simple MetaCPAN patch resulted in him spending much of his time looking at various issues. He now has a solid understanding of the MetaCPAN stack and we hope he can continue to contribute as we move forward.

We had a really good crew and we were all quite focussed. We removed ourselves from the main hackathon room so that we were able to have our own conversations and be less subject to distracting conversations from other groups. Since we were just outside of the main room we were able to talk with various others as they passed by our table. It was like having a space to ourselves, but we still felt very much a part of the hackathon.

Our main goal was to upgrade MetaCPAN from Elasticsearch 0.20.2 to 2.3.0. I spent a lot of time on this with Clinton Gormley at last year's hackathon. The upgrade at that time was planned to be a 0.20.2 to a 1.x version. We were optimistic, but it became clear that it was a job that we couldn't realistically finish. So, we left last year's hackathon with some good changes, but we weren't close to being able to deploy them. By this year, Elasticsearch had introduced even more breaking changes as it moved from 1.x to 2.x, so we had to factor those in as well.

For 2016, in the weeks coming up to the hackathon, Leo and I had been pushing a lot of code in preparation for this weekend. Around the same time, Mickey arrived on the scene and really moved things forward with his code changes too. So, we had a small core of developers working on the code well in advance of the hackathon. That's actually one of the nice things about an event like this. I didn't just write code when I got here. Having a firm date by which a number of things had to be done forced me to sit down and solve various problems in the weeks leading up to the hackathon.

What did we actually get done?

Elasticsearch Cluster

One criticism of MetaCPAN has been a lack of redundancy. We've had a good amount of hardware available to us for some time, but we haven't had a really good way to take advantage of it. Thanks to some of the work leading up to the hackathon, v1 of the API will run on an Elasticsearch cluster of 3 machines (rather than the single production box, which runs v0). Having a proper cluster at our disposal should make for faster searching and also greater redundancy if one of these machines needs to take an unscheduled break. On the human side, it will be a lot less stressful to lose one machine on a cluster of three than to lose one machine on a cluster of one. We all know these things happen. It's just a matter of time. So, we'll be better prepared for when a machine goes down.


Occasionally we need to re-index everything on CPAN. This takes a very long time. The current incarnation of MetaCPAN (v0) uses a script to do this and it can take 12 hours or more to run. If that script runs into some unhandled exception along the way, you have the rare pleasure of starting it up again manually. It needs some babysitting and it's far from bulletproof. It's also a bit hard to scale it.

Rather than trying to speed up our current system, we've added a Minion queue to our setup. This means that when we re-index CPAN, we add each upload as an item in our queue. We can then start workers on various boxes on the same network and we can run indexing in parallel. In our experiments we ran 17 workers each on 3 different boxes, giving us 51 workers in total. This gives us more speed and also more insight into which jobs have failed, how far along we are with indexing etc. It's a huge improvement for us.
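The shape of this setup can be sketched with Minion's standard API (the DSN, task name, and task body here are illustrative, not MetaCPAN's actual code):

```perl
use Mojolicious::Lite;

# Minion needs a backend; here we point it at a Postgres database.
plugin Minion => { Pg => 'postgresql://metacpan@localhost/minion' };

# Each CPAN upload becomes one job in the queue.
app->minion->add_task( index_release => sub {
    my ( $job, $archive ) = @_;
    # ... unpack $archive and index its modules into Elasticsearch ...
} );

app->start;
```

Workers can then be started on each box with something like `perl app.pl minion worker -j 17`, which is how you end up with 51 parallel workers across 3 machines.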


Minion has more than one possible back end. We've chosen to go with Postgres. This means that we now have Postgres installed for the first time and also available for other uses. Matt Trout has been working on Postgres replication for us so that we have some redundancy for our queues as well. Once that is available, he can also write a Pg schema which MetaCPAN can use as part of the back end. This means that at some future date we could begin to store our data in both Pg and Elasticsearch. This would give us a hybrid approach, allowing us to use Elasticsearch for the things it does well and a relational database for the kinds of queries which a NoSQL store doesn't handle well or at all in some cases.

As a historical footnote, the original version of the API first inserted into an SQLite database and then dumped that data into Elasticsearch. We may eventually come full circle and use a similar approach with Postgres.

RAM Disk

As part of Leo's sysadmin work, he has set up a RAM disk for the indexer to use when unpacking tarballs. Even if this only saves a fraction of a second per archive, when you're indexing 500,000 archives, even a small savings of time can be a win.

Elasticsearch Upgrade

Currently production runs on Elasticsearch version 0.20.2.  Our work this weekend has pushed us to using 2.3.0. Part of what has been holding us back is the many breaking changes which are involved in this particular upgrade. Much of our efforts at the hackathon were directed towards dealing with these various breaking changes. We haven't quite tackled all of them yet, but we're very close.

Deploying and Indexing a Beta Cluster

We now have a cluster of machines running our v1 beta.  I will publish the URLs as soon as we are ready for feedback.

Please note that our API versioning does not follow the Elasticsearch versioning. This frees us up to change API endpoints etc outside of the scope of another Elasticsearch upgrade.

CPAN River Integration

Joel Berger submitted a patch to integrate CPAN River statistics into the /distribution endpoint. The actual data will be provided by Neil Bowers. The patch has already been merged to the v1 branch and there has been some work done by Barbara on a front end display for the data. Integration

I had a chance to speak with Paul Johnson about, his CPAN code coverage service. I had initially put together an integration with his site 2 years ago at the QA Hackathon. I thought the integration was fine, but I ran into enough resistance from the MetaCPAN team that this pull request was never merged. We've now agreed on a way to move forward with this which will make everybody happy. There are open tickets on both the front and back end of MetaCPAN to address this.

Debian Packaging Information

Book is working on adding some information which can be used to correlate modules with their corresponding Debian packages. Once this is finished, this data can also be added to the distribution endpoint. The integration itself is pretty simple and will work much like the CPAN River.

Changes files

Graham Knop wasn't able to attend the QA Hackathon, but he did spend some time hacking from home. He has a patch in to alter how changes files are displayed.

Moving Towards Test2::Harness

I spoke with Chad Granum on the evening before the hackathon and I mentioned that we were using Test::Aggregate, one of the few distributions which was not yet playing nicely with Test2. I wasn't too worried about this since we pin our dependencies via Carton but also because I'd been hoping to move away from it. I had been thinking about Test::Class::Moose as an alternative, but I didn't want to go to the trouble of setting up test runners etc. Something simpler would be nice. Chad showed me Test2::Harness, which would give us the same advantages as running under Test::Aggregate. It looks great and should be available shortly. In the meantime I've gutted the Test::Aggregate logic from the tests and we're running everything the old-fashioned (slower) way for the time being. A switch to Test2::Harness in the near future should be trivial.


As part of our general cleanup, I released MetaCPAN::Moose. This is a simple bit of code which imports MooseX::StrictConstructor and namespace::autoclean into any class which uses it. After writing the code and the tests, I showed it to Sawyer. He sat down and immediately rewrote it using Import::Into. The code was now at least 50% smaller than it previously was and it was a lot cleaner. The tests continued to pass, so I was happy to release that to CPAN.
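The Import::Into pattern in question looks roughly like this (a sketch of the approach, not the exact MetaCPAN::Moose source):

```perl
package MetaCPAN::Moose;
use strict;
use warnings;

use Import::Into;
use Moose                     ();
use MooseX::StrictConstructor ();
use namespace::autoclean      ();

# Any class that says "use MetaCPAN::Moose" gets all three
# modules imported into it.
sub import {
    my $for = caller;
    Moose->import::into($for);
    MooseX::StrictConstructor->import::into($for);
    namespace::autoclean->import::into($for);
}

1;
```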

Moving forward we're going to publish a few more of our internal modules to CPAN. These will serve several purposes:

  • It will be useful to us as a way of sharing code between various apps which we have. We use Carton to manage various app installs, so sharing code can be tricky. We didn't want to go the submodule route unless we really had to.
  • Some people may also find this code useful. It's a good way to showcase our logic as a way of doing things (like setting up your own custom Moose type library). People could learn from it.
  • Alternatively, people might look at it and realize it's terrible. At this point they'll hopefully hack on it and send pull requests. Because this code is standalone with its own test suite, the overhead of getting started will be much, much less than it is for hacking on the rest of CPAN.

I don't think publishing internal logic to CPAN is generally a good idea, but for the reasons stated above, I think the code that we are talking about is well suited to it.

CPANTesters Data

We used to import CPAN Testers data into MetaCPAN using an SQLite database which they provided. At some point this database became unavailable. I'm encouraged to hear that this may not be a permanent state of affairs. If something can be worked out, MetaCPAN can once again easily import testers data into its API using the database.

Somewhere out there I can hear someone complaining that this isn't RESTful or whatever, but for the amount of data involved, it's actually a good fit. I did discuss with Doug what a REST API for this might look like, but to be honest, that would potentially be much more work than just creating the database on some arbitrary schedule and publishing it.

Interesting Things I Learned From Random Conversations:

  • Matt Trout suggests abandoning MooseX::Types and moving our type checking to Type::Tiny. I'm on board with that, but it's not a priority right now.
  • I learned from Sawyer that a simple speed optimization is switching to a Perl which is compiled without taint. Also he recommended some XS modules for header and cookie handling. The XS part wasn't news to me, but it's something I'll keep in mind for the future and certainly something I can make sure we do with MetaCPAN.

    Edit and caveat: As far as compiling Perl without taint mode goes, Sawyer was kind enough to refer me to some relevant p5p messages. Apparently there is some performance to be gained, but whether or not it's worthwhile for you likely depends very much on the behaviour of your application.

  • I heard (once again) that Devel::Confess is a "better" tool for debugging. I've been using it for a while now and am very happy with it. I'm not the only one.
  • From Mickey, I learned about Devel::QuickCover, which sounds like an interesting way to get a first pass at coverage data.
  • I now know how to pronounce Upasana.
  • I learned that I'm not the only person who has no clue how to read a flame graph.
  • After a lengthy conversation with Matt Trout on the Thursday it wasn't until I said, "hang on, I'll send you the link on IRC" that he looked at his screen and then looked back up and said, "oh, that's who you are". I guess I could have introduced myself formally when he first sat down, but eventually we got there.
  • After seeing the Roost laptop stand in action, I think I need one.

Unrelated to MetaCPAN

Karen Etheridge was able to merge my fix to allow MooseX::Getopt to play nicely with init_arg. It's a bug that has bitten me on more than one occasion. The fix has now been released.

After a conversation with BINGOS about Sort::Naturally, he got me co-maint on that module so that I can look at addressing an outstanding issue.

In Conclusion

For me, it was a great few days for moving the project along and socially quite fun. I got to see a bit of London on my arrival and spend a few hours at the British Museum, which I last visited about 20 years ago. In the afternoon, Leo was kind enough to drive me up to Rugby. Leo, Mickey and Joel were among the people whom I have spoken with on IRC but had never met in person. Making those real life connections is great.

On a practical level, I mostly started looking the correct way when crossing the street, but I wouldn't bet anyone else's safety on my ability to do the right thing there. Most of my ride from the airport to Leo's office consisted of me feeling quite sick to my stomach as part of me really wanted the driver to switch to the correct right side of the road. London rush hour traffic and narrow streets with two way traffic probably didn't help.

It was nice to see RJBS get a special show of thanks for his years as pumpking and also to witness the passing of the torch to Sawyer, who will do a fantastic job as he takes over. Also the tradition of publicly thanking the organizers has continued, which is a nice part of the weekend.

I should mention that this year there were no special outings. No video game museum tours, no chance to see how Chartreuse is made. Not even a trip to the set of Downton Abbey. That meant a few extra hours of hacking, bug squashing etc, which is nice too. I'm sure that deep down inside Neil really wanted to take us to a filming of Coronation Street, but he resisted the urge in order to further the goal of productivity.

All in all, I felt it was an extremely productive week for me and for MetaCPAN in general. My sincere thanks go out to the gang for having had me along once again this year.

HTTP::Response may have a different definition of success than you do

This has bitten me before, so I thought it was worth writing about. This RT ticket explains it better than I can, but let me sum things up here.

Consider this code:
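The snippet didn't survive here; the pattern being discussed is the common LWP idiom along these lines (URL is a placeholder):

```perl
use strict;
use warnings;

use LWP::UserAgent;

my $ua  = LWP::UserAgent->new;
my $res = $ua->get('');

if ( $res->is_success ) {
    # do something useful with $res->decoded_content
}
```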

99 times out of 100, this will do what you mean. Occasionally it doesn't.

What is the definition of success? In this case it means that there's an HTTP response code in the 200s.

Q: What happens if you've gotten a 200 response code in the server headers but (for example) there's a problem with the response body?

A: is_success still returns true.

is_success gives you a technically correct answer, but in this case technically correct isn't always helpful because something else may have genuinely gone wrong.

Consider what happens in this case where HTML::HeadParser is not installed.

If you want to check for success with confidence, you may want to check the 'X-Died' header as well.
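A belt-and-braces check might look like this sketch (again, the URL is a placeholder):

```perl
use strict;
use warnings;

use LWP::UserAgent;

my $res = LWP::UserAgent->new->get('');

# LWP sets the X-Died header when something (e.g. a content
# callback or parser) dies mid-processing, even on a 200 response.
if ( $res->is_success && !defined $res->header('X-Died') ) {
    # now we can be more confident nothing died mid-transfer
    print $res->decoded_content;
}
```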

That seems like a lot of work, so I've proposed a workaround that will at least warn when the 'X-Died' header exists. I don't know what the right answer is, but I do know that the current behaviour is confusing.

Upgrading Business::PayPal::API

I got co-maint on Business::PayPal::API about 3 years ago in order to patch one line that was throwing a warning. The previous release had been 2 years prior to that. So it's fair to say that this module has not lately been on a rapid release cycle. It's still in use, though, and lately there has been some activity related to it. So, I finally took an evening to sit down and try to cut a new release.

I got the bulk of the work done last night. I had been partially through a conversion to Dist::Zilla. I was able to finish that yesterday. This evening I finally got my head around how to get the tests running. It's a non-trivial process.

There are a huge number of commits that I merged in for this latest release. All of the tests are passing except for one related to searching for "mass pay" payments. It's in t/advanced/TransactionSearch.t. To be honest, that test failure doesn't concern me too much. I don't have a pile of time to invest in this at this point and my main concern is not breaking anything that involves sending and receiving payments. I'm also not sure when this test was actually last passing since most of the tests don't actually run when the module is installed. That all predates me.

I'm pragmatic enough to say that I can either try to fix this test at some point over the next 3 years or cut a new release now and get this thing out the door. I really do want to get this out into the wild. If it turns out there is an actual issue with the mass pay search, I hope someone will pitch in with a fix.

If you rely on this module, please consider giving the TRIAL release a test run. Pull requests are happily accepted.

If I've heard no complaints by Oct 15, 2015 I will make a proper release including these latest changes.

Edit: I just checked the current latest release and TransactionSearch.t fails there as well, so this test failure is at least 3 years old, which means it's not a blocker to putting out a new release.

Stop Writing Your Own Commify Functions

Writing your own commify function may well be right up there with writing your own web framework or templating system. Most of us have done it and it probably wasn't worth the effort. Enter CLDR::Number. (I should note here that it's not obvious from the name that this module will commify for you -- that's one of the reasons I'm writing this up.)

Let's see if we can do this:
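The example is missing from this version of the post; a minimal sketch of the decimal formatter, run across a few locales, would be:

```perl
use strict;
use warnings;
use feature qw( say );

use CLDR::Number;

# Group and decimal separators differ by locale.
for my $locale (qw( en de hi )) {
    my $decf = CLDR::Number->new( locale => $locale )->decimal_formatter;
    say "$locale: " . $decf->format(1234567.89);
}
# en: 1,234,567.89
# de: 1.234.567,89
# hi uses Indian-style grouping
```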

Easy, right? Not only does CLDR::Number handle commification (is that a word?) -- it also does this correctly for various locales. If you're trying to internationalize your application, this can save you a lot of time. Let's look at the output:

You can see that the differences are significant enough that you don't want to re-invent this wheel. The fact that we're using a "Decimal" formatter here is a bit misleading. It'll handle whole numbers just fine.

Let's try it with some digits after the decimal place, though.
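Something along these lines (my reconstruction):

```perl
use strict;
use warnings;
use feature qw( say );

use CLDR::Number;

my $decf = CLDR::Number->new( locale => 'en' )->decimal_formatter;
say $decf->format(1234.56);   # 1,234.56
```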

That works nicely as well.

Now, did I mention that this distribution includes a couple of other formatters? Let's look at currency. That's another tricky one.
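A sketch of the currency formatter, again compared across locales (the locale list is my choice):

```perl
use strict;
use warnings;
use feature qw( say );

use CLDR::Number;

for my $locale (qw( en de fr ja )) {
    my $curf = CLDR::Number->new( locale => $locale )
        ->currency_formatter( currency_code => 'USD' );
    say "$locale: " . $curf->format(9.99);
}
# en: $9.99 -- the other locales vary in symbol, placement and spacing
```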

The output is:

Crazy, right? I had no idea that representing a price in USD could vary so wildly across locales.

Let's have a look at the last formatter, then.
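A minimal percent formatter sketch:

```perl
use strict;
use warnings;
use feature qw( say );

use CLDR::Number;

my $perf = CLDR::Number->new( locale => 'en' )->percent_formatter;
say $perf->format(0.054);   # 5%
```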

The output looks like:

We can see here that the percent formatter rounds to the nearest percent by default. We can change that with an argument when creating the formatter.
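For example, passing maximum_fraction_digits when creating the formatter:

```perl
use strict;
use warnings;
use feature qw( say );

use CLDR::Number;

my $perf = CLDR::Number->new( locale => 'en' )
    ->percent_formatter( maximum_fraction_digits => 2 );
say $perf->format(0.0525);   # 5.25%
```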

Our output will then look like:

I could expand on the various arguments a bit more, but this is the gist of it. This is an extremely handy module which saves you from writing your own formatters and makes it easy for you to keep folks across the globe happy with your internationalization efforts.

I've personally already gotten a lot of use out of this code. Many thanks to Nova Patch and Shutterstock for releasing it!

Addendum: Choosing the most specific locale

After I originally published this post, Tom Legrady was kind enough to reach out to me to say that relying on "en" as a locale may not always be what you want.  Consider the case of currency.  Let's tweak our example to display various "en" locales.
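A sketch of that comparison (the particular "en" locales here are my selection):

```perl
use strict;
use warnings;

use CLDR::Number;

for my $locale (qw( en en-CA en-GB en-IN en-US )) {
    my $curf = CLDR::Number->new( locale => $locale )
        ->currency_formatter( currency_code => 'USD' );
    printf "%-6s %s\n", $locale, $curf->format(9.99);
}
```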

Let's have a look at the results:

If you look closely, you'll see 4 distinct currency formats just across various "en" locales. I've sorted them by popularity across the various locales. (I should note that I was surprised at the outcome.)

So, if you can get to the most specific, correct locale for a user, it probably makes sense to use that locale. Thanks, Tom!

On the status of HTTP::BrowserDetect

One of the first modules I ever took over on CPAN is the venerable HTTP::BrowserDetect. I remember I was giving a talk at Toronto PerlMongers one night about a UserAgent parser I had worked very hard to build. When I was finished, one of the comments was "Why didn't you just use HTTP::BrowserDetect"? The answer was pretty simple -- I just didn't know about it.

After I got familiar with HTTP::BrowserDetect, I saw that it was on some levels better than what I had written, but that it needed some love. To his credit, the module's author (Lee Semel) gave me co-maint pretty much immediately. I even wrote a blog post about my efforts. It looks like that was only my second ever post on this blog. Good times! That was 5 years ago (give or take a week), so now feels like a good time to revisit this topic.

About a year after my initial post, Neil Bowers did one of his very comprehensive module reviews. This time the topic was UserAgent parsers. If you haven't read it, it's a very thorough and very good analysis of what was out there at the time. That was the good part. The bad part is that HTTP::BrowserDetect stacked up terribly against other modules in terms of how long it took to parse strings. Neil did, however, conclude that it was the best module at the time for recognizing bots.

After he published this, Neil and I had some back and forth about replacing the guts of some or most available UserAgent parsers with a common parser to reduce many of the duplicate efforts. I did put up a few rough ideas in the form of UserAgent::UeberAgent but got distracted by various things in the meantime. That however is a digression.

The one takeaway I want to leave you with is the fact that HTTP::BrowserDetect is alive and well. Over the past 5 years it has seen many, many pull requests. As a result of this, it recognizes even more bots, parses even more UserAgent strings, has better test coverage and has had some significant cleanups in the internals. (It's still kind of weird on the inside, but it's over 15 years old at this point -- I'll cut it a lot of slack.)

What actually drove me to write this post is that Windows 10 was set to roll out on July 29, 2015. That's the same date that I got a pull request which adds Windows 10 and Windows 10 Edge Browser detection. I was able to upload a new release to CPAN on the same day.

The cool thing about this is that it's not an anomaly. I get lots of pull requests in a very timely manner. I don't really advertise this module, but people find it and send patches. Because it sees a fair amount of use, it is also kept nicely up to date. It has come a long way since March 16, 2000. It's a teenager, but it's in a good place. It's loved. It's cared for and fed and it still does a great job.

As far as the benchmarking with respect to other modules goes, I'm not sure if Neil will find the time to run another report, but I'd be very interested to see what the current state of affairs is.

There is, however, one thing I am able to benchmark and that is the speed at which I've traditionally been able to merge pull requests for this module. Recently I wrote github-mergevelocity, which can tell you how responsive a repository has historically been to attempts at collaboration. Let's see how HTTP::BrowserDetect performs:

60 out of 61 pull requests have so far been merged via the Github "merge" button. On average, it takes 3 days for a pull request to get merged. There are no currently open pull requests and the one request which was closed was handled within 2 days. I'm happy with that record. There are, no doubt, authors who are far more responsive than this, but as an open source contributor, if I get a pull request merged within a week or two of being sent, that seems pretty responsive to me.

This is also a commentary on the quality of the pull requests which I have gotten. They're generally quite good. In cases where some work was needed, it looks like that was done on a fairly quick turnaround as well.

There are still some open issues, so if any of this makes you at all inclined to send a patch or to kick the tires on this small piece of Perl history, I'd be happy to hear about it. 🙂

How I Spent my 2015 Hackathon

On May 2, 2015 I had the pleasure of attending the hackathon, which was hosted at the Bloomberg tower in Manhattan. I was privileged to be one of 5 developers to have their travel and hotel sponsored by Bloomberg L.P. This made attending the event very easy: basically all I had to do was show up at the airport and the rest was taken care of!

The event was very well organized, had a great vibe and was very encouraging to newcomers (to Perl and to open source contributions). For my part, I was there to work on MetaCPAN and (hopefully) be there as a resource to anyone else who wanted to contribute to MetaCPAN.

I'm happy to say that I got a number of things done. I was able to fix all of the failing tests on ElasticSearchX::Model. This is a module which MetaCPAN relies on heavily. Going into it, I wasn't sure if the failures were in the code or in the tests. Luckily it was just a problem with the tests, so that was easy enough to fix. I trapped some warnings while I was at it and eventually got a green light from Travis. I got a good chunk of this done on the flight in, so I was able to finish it and release a new version as my first order of business at the hackathon.

Moving forward, I continued to work on the MetaCPAN Elasticsearch upgrade, which I had been working on at the QA Hackathon. I was able to fix bugs in the module which imports CPAN mirror data into the little-known mirror endpoint of the API. I also (mostly) fixed bugs in the module which imports CPAN Testers data into the release objects of the API. That still needs some work, but getting this far took a fair amount of digging around.

In addition to this, I worked with MATTP, who added more handy keyboard shortcuts to MetaCPAN. (For example, go to and type "pr" -- that will take you straight to the GitHub pull requests for that repository.) I was able to merge and deploy this change at the hackathon.

I also had some good conversations with RJBS about finding recursive dependencies for modules and graphing them. It turns out he already has a workable solution for this and I don't think converting his code to use MetaCPAN would actually speed things up for him.

I finally met Yanick Champoux, who was a very early contributor to MetaCPAN. I was able to recognize him from the half of his face that is exposed by his avatar! I should also mention that he helped me find my phone not once, but twice in 24 hours. (I really have to keep better track of it.)

I also had a fun dinner with Florian Ragwitz and Augustina Ragwitz. (Florian has been involved with MetaCPAN since it was about six months old).

And, to round out the namedropping, I also met the following folks for the first time: I had an interesting chat with David Farrell about and using Perl6 to parse Pod. Charlie Gonzalez showed me all of the interesting stuff a Fitbit can track, and I had very brief chats with Nick Patch and Peter Martini, whom I basically crossed paths with as I was headed for my ride to the airport.

The facilities were outstanding, as was the plentiful food (breakfast and lunch). This was all made possible by the sponsors: Bloomberg, RubensteinTech and

The organizers did a fantastic job with all of this, so I should particularly thank Jim Keenan, Charlie Gonzalez, David Golden and Kevin P. Fleming.

This was the 2nd hackathon. I have a hunch that this means there will also be a 3rd. If you have a chance to attend this hackathon in the future, my advice would be: do it!