meta::hack is back!

For the third year running, we have the privilege of working out of the ServerCentral offices in downtown Chicago in order to hack on MetaCPAN. This year, five of us will be working on improving our corner of the Perl ecosystem. The in-person attendee list is as follows:

  • Doug Bell
  • Joel Berger
  • Olaf Alders
  • Mickey Nasriachi
  • Shawn Sorichetti

This, of course, would not be possible without the help of our 2018 sponsors: Booking.com and ServerCentral. You’ll notice their logos on the front page of MetaCPAN into 2019 as our way of thanking them for their support.

I’d like to briefly revisit last year’s event, which was quite productive and would not have happened without our 2017 sponsors.

In 2017, we made excellent progress on various fronts. Part of this progress came via an onsite visit from Panopta, our monitoring sponsor. They have a Chicago office and were very responsive when I asked them if they’d be willing to send someone down to chat with us about improving our Panopta configuration. So, Shabbir kindly dropped by, giving us in-person advice on how to configure our Panopta monitoring. One of the most helpful tweaks was setting up an IRC integration, so that outages are now broadcast to #metacpan on irc.perl.org.

Here’s a photo of Shabbir’s visit. (I wasn’t able to attend in person last year, but I was able to do so remotely. I’m the one watching from the wall.)

meta::hack v2 in 2017

From left to right:

Doug Bell, Thomas Sibley, Nicolas Rochelemagne, Graham Knop, Leo Lapworth (front), Joel Berger, Brad Lhotsky, Olaf Alders (pictured on monitor), Shabbir Karimi (from Panopta) and Mickey Nasriachi.

I’ll post more about our progress over the next few days. We’re working on some fixes and something new and shiny too. Wish us luck!

How lazy am I?

Occasionally I find myself running some random Perl script from a Github gist or dealing with some code from a colleague that doesn’t have proper dependency management (yet). It’s a bit painful to

  • run the script
  • wait for it to die on a failed dependency
  • install the missing dependency
  • re-run the script
  • wait for it to die
  • install the missing dependency
  • rinse
  • lather
  • repeat

Being lazy

I was aware of Acme::Magic::Pony. It already solves this problem, but it uses cpan for the installation. lib::xi also solves this problem, but does so via cpanm. (I was only made aware of lib::xi later.) I’ve lately been using App::cpm to install modules, so I wanted a solution that would also use App::cpm. I came up with something which I’ve been using for my own needs. I’m now ready for the general public to tell me how bad of an idea it really is. Drum roll please…

I give you lazy.

Let’s take an example script, called foo.pl:
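
(A minimal sketch; it assumes Open::This’s exported to_editor_args() function, which comes up again below.)

#!/usr/bin/env perl
use strict;
use warnings;

# Open::This is deliberately a module you probably don't have installed yet.
# to_editor_args() is assumed here based on Open::This's interface; it
# returns the arguments you could pass to an editor.
use Open::This qw( to_editor_args );

print join( ' ', to_editor_args('LWP::UserAgent::new') ), "\n";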

If you are missing a required module, your output might look something like:
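
(Perl’s standard missing-module error, along these lines, assuming the sketch above:)

Can't locate Open/This.pm in @INC (you may need to install the Open::This module) (@INC contains: ...) at foo.pl line 8.
BEGIN failed--compilation aborted at foo.pl line 8.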

Let’s try re-invoking this script, but this time with lazy:

perl -Mlazy foo.pl

This will attempt to install any missing dependencies in a place where they are globally available to you. If all goes well you’ll be installing dependencies and running your code all in one invocation of your script. Does it work? Let’s test it.

It did work and it was pretty painless. lazy noticed that a module (Open::This) was missing, so it installed it, loaded it and then allowed the script to run successfully to completion.

(Note that the above script is giving you the arguments which you could pass to an editor, like vim to open up LWP::UserAgent at the line where sub new is defined. I hope to get to Open::This in a subsequent blog post.)

That’s enough information to make you dangerous. At this point you can stop reading and get on with your life. If you’d like to see some more advanced use cases, read on.

Using an arbitrary local::lib

If you’d like to keep the new dependencies in a sandbox, you can do this:

perl -Mlocal::lib=my_sandbox -Mlazy foo.pl

This will install the missing modules to the my_sandbox folder and it will also use this folder to find them. Your install might look something like:

But wait, there’s more!

Using a default local::lib

If you’re too lazy to specify a local directory, but you still want to use a local lib, lazy will use whatever your first default local::lib install location is:

perl -Mlocal::lib -Mlazy foo.pl

In my case it looks like:

But wait, there’s more!

Passing args to cpm

If you want to pass some extra flags to cpm you can do this by passing them to lazy:

perl -Mlazy=-v foo.pl

This switches on cpm‘s verbose reporting. The first part of the output now looks like:

Now, if you also want to remove the colour from this reporting, that’s easy enough:

perl -Mlazy=-v,--no-color foo.pl

Maybe you want to install man pages too:

perl -Mlazy=-v,--no-color,--man-pages foo.pl

Check cpm --help for a more exhaustive list of available options.

Anything you can do via command line switches, you can also do directly in your code.
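
For example, assuming lazy passes its import arguments through to cpm in the same way the command line switches above do, you could put this directly in your script:

use lazy qw( -v --no-color );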

In Your Editor

This whole thing got me thinking about a vim integration. In my case I’ve added the following to my .vimrc:

nnoremap <leader>l :!perl -Mlazy -c %:p

This essentially means that in my editor, if I type ,l then vim helpfully takes the name of the file I’m currently working on (Foo.pm) and runs perl -Mlazy -c Foo.pm. Since lazy will still be invoked when run via -c, we don’t even have to run code to install modules. We can essentially just run a compile check and this sets the module installation chain of events into motion. I can install missing modules without leaving my editor.

Now, this is where the pedants among you speak up and say “but -c does run code in BEGIN and CHECK blocks”. See more discussion at https://stackoverflow.com/a/12908487/406224. Essentially, code can still be run under -c, and it’s helpful to be aware of this for security reasons.

In Your Linter

As an Ale (https://github.com/w0rp/ale) + vim user, I’ve been using SKAJI’s syntax-check-perl plugin for Ale. Since this linter runs your code via -c, adding a use lazy; to your code will install your CPAN modules on demand, every time your linter runs. So, it’s possible for your modules to be installed asynchronously as you happily go about writing your code. Just be sure to add a use lazy; to the code you are writing. This statement must come before any modules which you would like to be auto-installed, as in the sketch below.
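
A minimal sketch of that ordering (Some::Module here is just a hypothetical stand-in for whatever you actually need installed):

use strict;
use warnings;

use lazy;            # must come before anything you want auto-installed
use Some::Module;    # hypothetical module that lazy will install on demand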

Wrapping Up

You get the idea. I’d be tickled if you decided to give lazy a spin and then headed over to Github to tell me all the things that it doesn’t do correctly. 🙂

Perl Toolchain Summit 2018 Wrap-up Report

Getting There

This year I had the pleasure of attending the Perl Toolchain Summit in Oslo, Norway. Because of a conflict in my schedule, I initially didn’t think I’d be able to attend. After a lot of mental back and forth, I decided I’d try to make it work. The compromise was that this year I would leave on Sunday morning rather than on Monday. That meant I wasn’t able to participate in the last day of the summit, but I’d still have 3 entire days to get things done.

I left Toronto on Tuesday evening and arrived in Frankfurt on Wednesday morning. I got breakfast in Germany and after a two hour layover I was on a short flight to Oslo. From the Oslo airport I took a train into town.

I got into my tiny, tiny room and changed into my running clothes. I spent the next couple of hours running mostly along the water and making lots of stops along the way to take pictures and hang out. That was enough to burn off any excess energy.

In the evening I met up with everyone else at a bar called RØØR. Here it finally became clear to me why Salve had said alcohol would be expensive. (I hadn’t really believed him when he originally let us know.) I had a great time regardless and it was nice to see a lot of familiar faces and catch up.

Day One

On Thursday morning we had an 8:30 breakfast at the venue and then 9:00 AM standup, where everyone had a chance to introduce themselves and talk about what they intended to work on. Leo and Mickey had claimed a small room for our group, which meant that for the span of the summit we would all be able to sit around the same table and whiteboard. Some folks dropped by early in the morning to talk about a few items on their wish list. We also made a game plan for the summit and divided up the work amongst ourselves. These sorts of face-to-face meetings are what make the summit invaluable. The first morning meeting allowed us to get everyone on the same page and then get to work.

This year we didn’t come into the summit with any one grand project to complete. Rather, we spent some time on smaller items: bug fixes, attending to outstanding issues on Github and infrastructure work. In particular, we’re working towards having a better disaster recovery plan and failover capacity for MetaCPAN. Leo did a lot of work on that, so I’ll let him blog about his contributions. For my part, I picked back up on the work that I had started at meta::hack this past November.

At meta::hack I had spent a good chunk of time upgrading our Debian from Wheezy to Stretch. This is an upgrade across two major releases, and it turned out to be non-trivial in our case. A large part of the problem is that with our current setup we are on a version of Puppet (3.8.7) which is no longer supported. Since Stretch ships a newer Puppet (4.8.2) by default, I took this as an opportunity to upgrade Puppet. This meant that some of our Puppet config needed updating and that we also needed to update our PuppetForge modules (3rd party dependencies). This was complicated enough that I was going to need a lot of help from Leo, so I sat myself beside him in order to make bothering him more convenient for the both of us. It turned out that I bothered him a lot. I shudder to think how long it would have taken me to work through this remotely. I can’t say that I enjoyed the work, but it was much more enjoyable to do this face to face and it was a much better use of everyone’s time.

Thursday also happened to be Leo’s birthday so we went out for drinks and various foods made of pork at a bar called Crowbar. Marcus Ramberg and Batman joined us there and eventually a good chunk of the attendees came along as well.

Day Two

I had gotten the bulk of the upgrade done on Thursday. On Friday I got up early and went for a run. Then, at the venue we continued tying up some loose ends on the Debian/Puppet upgrade and getting Leo involved in testing it out. I should add that part of this upgrade also included an overhaul of how we build our Vagrant boxes. Currently if you want to deploy a Vagrant instance you download a pre-built box which we have uploaded to one of our servers. Moving forward you’ll be able to build your own box on demand. It will take slightly longer to get set up, but you’ll have the latest and greatest software right away and you won’t need to rely on any one of us to upload an updated box for you to download. In the evening we all went out to a restaurant where there were speeches, excellent food and great conversations.

Day Three

On Saturday morning, Mickey and I met up at 6:30 AM for a quick run. At 7:30 AM we met up with Leo and Sawyer and rented some bikes. We cycled along the waterfront by the fortress and the Opera House before making it back just in time for standup. By now we were upgrading Debian and deploying the new Puppet on two of the three machines which Bytemark generously donates to us. This turned out to be non-trivial as the first machine we upgraded would not reboot. After some excellent technical support from Bytemark we were able to get past this as well.

I also added a URL mapping for static files to the MetaCPAN API. This is a first step in self-hosting some API documentation, likely using Swagger in some way. This is partly what’s preventing the MacOS Dash application from having more Perl documentation available.

In the evening most of us hung out at a local hacker/maker space. For me, it was a nice way to wrap up the summit.

In General

While all of this was going on, we also had a good debate about sponsorship of our project. We’ve been getting a lot of interest from potential sponsors over the last year or so and we’ve mostly been approaching it on a case by case basis. Having everyone in on a group discussion allowed us to set a sponsor policy moving forward that we think will be good for both us and our sponsors. (If your company would like to sponsor MetaCPAN or our next meta::hack conference, please don’t hesitate to get in touch with me).

I also fit in the usual code reviews over the three days and tried to catch up on some outstanding Github issues. It turns out I had missed some entirely; there were issues going back as far as January which had not yet been responded to. 🙁

As part of some of the other work going on, there was progress made towards a tighter integration of the CPAN river data in the MetaCPAN API. Also, we now have access to Neil Bowers’ CPAN river data generator. There’s a plan to have MetaCPAN directly generate this data, rather than having us pull the data from him.

There was also a fair bit of discussion as to where to host the 2018 meta::hack. Our TODO list is long enough that the Perl Toolchain Summit isn’t quite enough for us to make the progress that we think we need to make. So, hopefully, we’ll be able to get together once more this year to continue to make progress.

Additionally Babs Veloso showed us a proposal for a redesign of the MetaCPAN author pages. It looks great. Now we just need to find someone to implement it. We also spent a fair amount of time on a part of the project which I can’t actually tell you about just yet, mostly because the details haven’t all been finalized yet. If all goes as expected, we’ll announce that in the coming months.

Getting Home

My flight back early on Sunday was uneventful. I spent an hour in Copenhagen and then was back home on Sunday afternoon in time to take the kids to the park. I got onto an Eastern Standard Time sleep schedule by Sunday night. I dropped the kids off at school on Monday morning and then flew off to Boston for meetings until the following Friday. Since I telecommute, I rarely need to travel (or even wear shoes), but for some reason I ended up with back-to-back trips this year. In the end, it all worked out.

Thanks

On the whole, this year’s PTS was productive, fun and very well organized. I’d like to thank Salve J. Nilson, Neil Bowers, Stig Palmquist, Philippe Bruhat, Laurent Boivin and anyone else I may have missed. The venue, food, level of organization (and hoodies) were all excellent. It was a smooth experience from start to finish. I feel like I was able to get a lot of good work done and I am particularly pleased that I was able to finish what I had set out to do. Collectively, there was much more work completed on MetaCPAN than what I’ve described.

I would not have been able to make this trip without the support of MaxMind (my employer). MaxMind has sent me to PTS without hesitation each time that I’ve been invited and has also been a sponsor of the event for these past two years. Thanks, MaxMind!

Additionally, I’d like to thank all of our sponsors, without whom this would not have been possible:

  • MaxMind
  • NUUG Foundation
  • Teknologihuset
  • Booking.com
  • cPanel
  • FastMail
  • Elastic
  • ZipRecruiter
  • MongoDB
  • SureVoIP
  • Campus Explorer
  • Bytemark
  • Infinity Interactive
  • OpusVL
  • Eligo
  • Perl Services
  • Oetiker+Partner

WWW::Mechanize Best Practices

fields

Recently at $work we were discussing some of the behaviours of WWW::Mechanize when submitting forms. For instance, when you pass the fields parameter to the submit_form() method, Mechanize might take a very lax approach to submitting your data. Imagine the following form:
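
(A minimal sketch of such a form, with inputs named “A” and “B” and a hypothetical id of “login”:)

<form id="login" action="/login" method="post">
    <input type="text" name="A">
    <input type="text" name="B">
    <input type="submit" value="Submit">
</form>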

Now take the following code:
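
(A minimal sketch, using the hypothetical form and URL above:)

use strict;
use warnings;

use WWW::Mechanize ();

my $mech = WWW::Mechanize->new;
$mech->get('https://example.com/login');    # hypothetical URL

$mech->submit_form(
    form_id => 'login',
    fields  => { A => 'foo', B => 'bar', C => 'baz' },    # note the extra "C"
);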

Mechanize will happily post *all* of these fields to the form for you, even though the form doesn’t contain an input named “C”. If there is no server-side validation which checks for unknown fields, you’ll likely get a 2XX status code in your response and all will appear to be well. This can lead to some confusing and hard to debug situations, especially if you’ve done something as subtle as misspelling an input name. You could be banging your head against the wall for quite some time.

with_fields

You can protect yourself from this scenario via with_fields, which will only submit forms which contain all of the provided fields.
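
(A sketch of the same submission, using with_fields instead of fields:)

$mech->submit_form(
    form_id     => 'login',
    with_fields => { A => 'foo', B => 'bar', C => 'baz' },
);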

If you try to run this code, Mechanize will die with a message like “There is no form with the requested fields at…“. This is already a big improvement.

(Note that form_id is optional in this case. If you leave it out then Mechanize will only look for a form which contains *all* of the fields provided by with_fields. If we provide form_id then Mechanize will want a form which matches the provided id and which also provides all of the required fields.)

strict_forms

Can we do any better than this? You bet. We can supply a strict_forms parameter to submit_form. This switches HTML::Form’s strict behaviour on. From HTML::Form’s docs:

Initialize any form objects with the given strict attribute. If the strict is turned on the methods that change values of the form will croak if you try to set illegal values or modify readonly fields.

That’s above and beyond what with_fields brings to the table. Handy stuff. Try to do this wherever possible if you want to make your life easier.
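
(A sketch of what that looks like; this is the example referred to in the next two paragraphs:)

$mech->submit_form(
    form_id      => 'login',
    fields       => { A => 'foo', B => 'bar', C => 'baz' },
    strict_forms => 1,
);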

If strict_forms finds a problem, your code will die with something like “No such field ‘C’ at…“.

Notice that the above example uses fields and not with_fields. I would encourage you to use the above incantation rather than the one below whenever possible. The reason is that with_fields will throw an exception *before* strict_forms. So, in many cases you’ll end up with the less helpful error message — the one which doesn’t name the offending field(s). You may not be able to use with_fields in all cases, so when you do have to use with_fields keep in mind that strict_forms can still find additional issues (like trying to set readonly fields). My advice would be to use strict_forms whenever possible.

autocheck

autocheck can save you the overhead of checking status codes for success. You may outgrow it as your needs get more sophisticated, but it’s a safe option to start with. Consider the following code:
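
(A minimal sketch with autocheck switched off:)

use strict;
use warnings;

use WWW::Mechanize ();

my $mech = WWW::Mechanize->new( autocheck => 0 );
$mech->get('foobar.comcom');    # a bogus, non-absolute URL

# No exception is raised; you'd have to inspect the response yourself.
print $mech->status, "\n";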

This code doesn’t die or warn, since it assumes you will “do the right thing” by checking status codes etc. Now try this:
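
(A sketch with autocheck switched on; note that autocheck is on by default when you construct WWW::Mechanize directly:)

use strict;
use warnings;

use WWW::Mechanize ();

my $mech = WWW::Mechanize->new( autocheck => 1 );
$mech->get('foobar.comcom');    # dies immediately with a helpful message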

If you run the above code you’ll get something like “Error GETing foobar.comcom: URL must be absolute at…“, which can also save you a lot of heartache.

HTTP::CookieJar::LWP

You are encouraged to install Mozilla::PublicSuffix and use HTTP::CookieJar::LWP as your cookie jar. This allows you to take advantage of HTTP::CookieJar which is more modern than HTTP::Cookies and “adheres as closely as possible to the user-agent rules of RFC 6265“.
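
(A minimal sketch of that construction:)

use strict;
use warnings;

use HTTP::CookieJar::LWP ();
use WWW::Mechanize ();

my $mech = WWW::Mechanize->new(
    cookie_jar => HTTP::CookieJar::LWP->new,
);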

See also https://metacpan.org/pod/HTTP::Cookies#LIMITATIONS

protocols_allowed

How about restricting the protocols your agent might follow? Let’s look at protocols_allowed:
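
(A minimal sketch:)

use strict;
use warnings;

use WWW::Mechanize ();

my $mech = WWW::Mechanize->new(
    protocols_allowed => [ 'http', 'https' ],
);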

This option is inherited directly from LWP::UserAgent. It allows you to whitelist the protocols your agent is willing to follow.

This will prevent you from inadvertently following URLs like file:///etc/passwd.

protocols_forbidden

If you don’t want to whitelist your protocols, you can blacklist them instead. Unsurprisingly, this option is called protocols_forbidden:
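
(Again, a minimal sketch:)

use strict;
use warnings;

use WWW::Mechanize ();

my $mech = WWW::Mechanize->new(
    protocols_forbidden => [ 'file', 'mailto', 'ssh' ],
);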

This option is also inherited directly from LWP::UserAgent. It allows you to blacklist the protocols your agent is not willing to follow.

This will prevent you from inadvertently following URLs like file:///etc/passwd.

Creating a Stricter Agent

If we put all of these together, we get something like:
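
(A sketch of the combined setup, under the same assumptions as the snippets above:)

use strict;
use warnings;

use HTTP::CookieJar::LWP ();
use WWW::Mechanize ();

my $mech = WWW::Mechanize->new(
    autocheck         => 1,
    cookie_jar        => HTTP::CookieJar::LWP->new,
    protocols_allowed => [ 'http', 'https' ],
);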

Using these settings in conjunction with strict_forms can help you with debugging, security and also your own sanity.

Making it Official

Much of what has been discussed here is now documented at https://metacpan.org/pod/WWW::Mechanize#BEST-PRACTICES.

Edit: It has been pointed out to me that readers of this article may also benefit from my previous UserAgent Debugging Made Easy post.

See Also

If WWW::Mechanize does not fit your use case, see also WWW::Mechanize::Chrome, WWW::Mechanize::Firefox, WWW::Mechanize::Cached, LWPx::UserAgent::Cached and Mojo::UserAgent.

My “Go for Perl Hackers” Cheatsheet

Last year I found myself working on some Go code at $work. When I’m trying to pick up constructs in a new language, I find it helpful to see how I would have done the same things in Perl. This sheet is far from complete, but I think it’s already helpful. You can find it at https://github.com/oalders/go-for-perl-hackers. Comments, critique and pull requests are welcome. I’ve already had some helpful feedback via Twitter which I’ve incorporated.

As an aside, I did this a while back for Objective-C.

New defaults for Perl Linting in Vim’s Ale Plugin

In my previous post, I talked about using Ale with vim for linting and syntax checking. Since that time, the Ale defaults for Perl have changed. Perl::Critic checks are still on by default, but you will need to enable the syntax and compile checks that are run via the perl binary itself.

The reasoning for the new default is described in the Ale docs. If you want to (re-)enable your Perl checks, you can follow the example in my dot files.
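
As a sketch, re-enabling them in your vimrc looks something like this (check the Ale docs for the current linter names, since they have changed over time):

let g:ale_linters = { 'perl': ['perl', 'perlcritic'] }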

vim, Ale, Syntastic and Perl::Critic

As a vim user, I’ve used Syntastic for a long time. It’s a great tool for syntax checking. However, I was recently introduced to Ale. Ale does a lot of what Syntastic does, but it does it asynchronously. The practical benefits are:

  • You should experience less lag when editing large files
  • Ale flags problematic lines containing errors and warnings in a gutter, making it easy to find problems
  • Detailed information about errors and warnings appear at the bottom of your buffer

I may actually be underselling it. Ale is almost a drop-in replacement for Syntastic. (At least it was for me). Try it out. I don’t think you’ll go back to Syntastic once you’ve tried Ale.

Ale Configuration

let g:ale_perl_perl_options = '-c -Mwarnings -Ilib -It/lib'

This is what I’m currently using. (Note that Ale defaults to -c -Mwarnings -Ilib.) Often I’m working with a t/lib directory. Having this included by default means less chance of my code not compiling when it’s run by Ale. Of course, pass in any flags which you may require here. You’ll likely want to keep the -c since you want to compile the code rather than run it. Keep in mind that code in BEGIN blocks will still be executed under the -c flag, so there can be security implications to opening random Perl files with this setting. Explicitly enabling the warnings pragma at the command line will cover you for cases where you haven’t already enabled warnings in your code. Your needs may vary depending upon your environment, so keep in mind that your vimrc can source other files. You can add something like the following to your vimrc:

source ~/.local_vimrc

Perl::Critic Configuration


let g:ale_perl_perlcritic_showrules = 1

When this is enabled, you’ll be shown which Perl::Critic rules have been violated by your code. This helps you to a) fix the issue or b) copy/paste the rule class name so that you can whitelist the code in question. For example, you may be told that you’re violating Perl::Critic::Policy::Modules::ProhibitEvilModules. If you feel you need to embrace the evil, this makes it easy to add a ## no critic (ProhibitEvilModules) to the code in question. Trust me, this is much easier than digging around to figure out exactly which policy you’ve violated.

Lastly, the default behaviour of Ale’s Perl::Critic linting is to display all violations as errors. I find this confusing, because if something is not preventing my code from compiling, I do not consider this to be an error. In order to set Perl::Critic violations to be displayed as warnings, just add the following to your .vimrc:

let g:ale_type_map = {
\ 'perlcritic': {'ES': 'WS', 'E': 'W'},
\}

The above is a map, so you can add config options for other Ale linters to this map as well.

Unfortunately, the current incarnation of Ale can’t tell the difference between a Perl error and a warning. I have started a pull request to try to improve this somewhat, but I got stuck on the Docker end of things and haven’t gotten back to it yet. If you’d like to pitch in, I wouldn’t mind the help. 🙂

Announcing meta::hack v2

It’s that time of year again. We did a bunch of hacking on MetaCPAN at the Perl Toolchain Summit and we got a lot done, but it wasn’t nearly enough. Our TODO list never gets shorter and there are lots of folks willing to pitch in, so today I’m announcing that meta::hack v2 will take place from Nov 16-19, 2017 at Server Central in Chicago. As a reminder of how things went with meta::hack v1, please refer to my wrap-up report from that event.

The attendees this year are:

This group basically represents the MetaCPAN core contributors. We are restricting meta::hack to this core group again purely so that we can focus on getting the maximum amount of work done over the short time that we have together. If you are interested in joining a future meta::hack, now is a good time to start contributing to the project. Our intention is not to exclude willing participants, but to keep the group limited to people who are already up to speed on contributing to the project. Having said that, if any hackers in the Chicago area want to hang out, we’re happy to go out to dinner with you while we are in town.

An event like this does cost money and we are still looking for some more sponsors. Some real estate on the front page of MetaCPAN will be used to recognize sponsors. If you or your company are interested in supporting this event, please contact us so that we can send you a copy of the sponsor prospectus.

How I Spent my Perl Toolchain Summit 2017

This was my 5th year of being invited to participate in the Perl Toolchain Summit (formerly the Perl QA Hackathon). It was a real pleasure to be invited to a rebranded version of the same helpful event.

Our Sponsors

This event would not have been possible without our sponsors. Let me take a moment to thank:

Overview

For the second year in a row, MetaCPAN was well represented at the event. This is important because it really does allow us to get much more work done. Getting everyone in the same room allows us to make decisions quickly and often deploy new ideas on the same day they’ve been discussed. Just like last year, we got a lot accomplished this year. I’m only going to touch on the work which I was directly involved with. Leo Lapworth, Mickey Nasriachi and Graham Knop all did much more than I’ll cover here.

I’m going to tackle this in roughly chronological order.

Wednesday

I arrived on Wednesday evening, and met up with most of the others at a local restaurant. It was good to see familiar faces and talk about what people had planned for the coming days.

Thursday

Thursday was the first real day of work. I had gotten the 06perms index into the MetaCPAN API as part of my preparation for PTS, but it wasn’t yet exposed via the UI. So, on Thursday I made it my mission to begin the front end work. I also spoke with Paul Johnson and Leo Lapworth about tighter integration of cpancover.com into MetaCPAN. I think this is important because many people probably don’t know about cpancover.com. This site runs coverage tests on all modules at upload time. It’s a great service and something that more people should probably know about.

In the evening we went out as a group to an Ethiopian restaurant. That was a first for me and I really enjoyed it.

Friday

On Friday we went live with the 06perms work from the previous day. Once it was up and running, Neil suggested I write a blog post to get the word out about the new feature.

The module permission views have already allowed people to spot inconsistencies in some module permissions. I had some back and forth with Babs Veloso and Neil about the permissions UI. We’ll be improving the UI moving forward, but it was important to get something out there now rather than perfecting it first.

I took a few minutes to release a new Net::HTTP which consisted of a patch from Shoichi Kaji, who was sitting just a few feet away from me at the time. I did this after a code review from Karen Etheridge, who was sitting in the next room.

I also got in a pull request to change some of the copy on the show more/less links for large Changes files.

Saturday

At meta::hack this past Fall, Joel Berger had worked on moving the MetaCPAN module search from the front end to the API, so that anyone could use it. We quietly merged his changes earlier this year, but the endpoint was lacking in tests and documentation. This was no fault of Joel’s. Leo set about to document this new endpoint and I spent part of my Saturday helping with this. He wanted to make some code changes, but didn’t want to do this without existing unit tests. So, I wrote some passing unit tests for the master branch and he used them in his documentation branch to establish that nothing was being broken by his cleanup work. Now we have better test coverage, some more documentation and better organized code.

While I was working on the tests, I came to the realization that our Travis build times were going to be very problematic. Some of our builds were taking up to 45 minutes or longer. This isn’t a huge issue on normal days where you can push something and come back to it later in the day. However, at a hackathon, where many people are pushing many branches, it becomes a significant constraint. Waiting 30-45 minutes for a passing build before a branch can be merged can be a real problem.

The TLDR is that build times are now in the 5-10 minute range for both the front and back end code. That was a fun journey for me and will be the subject of its own blog post. There’s some information in there which can be helpful to other projects.

As part of my efforts to speed up the build, it was helpful to have Shoichi Kaji sitting at the next table. I had a question about App::cpm and he helped me with a code sample and an explanation immediately. That allowed me to shave 50% off the build time for installing CPAN modules. I also had an issue with a test suite which needed some files to run sequentially and some in parallel. Graham Knop had an answer for me within minutes and Leon Timmermans came over a bit later to help me get the implementation right. This helped me squeeze a lot of extra performance out of the build in a very small amount of developer time.

I also took some time to add some missing meta::hack sponsors to the MetaCPAN sponsors page, and I got a very old pull request merged which fixed a bug where user favorites were shown for modules which had been deleted.

Saturday evening a group of us walked into the old part of town. Leo Lapworth, Nicholas Rochelemagne, Joel Berger and I ended up at a really fun little restaurant called Chez Sylvie, which offers regional cuisine. Some of us had eaten there three years ago and I had good memories of it. I’m happy to say that it did not disappoint.

Sunday

By Sunday, now that we had faster Travis builds, I took a proper look at the build logs and it became clear to me how noisy our build output was. It was really hard to distinguish between real problems and noise that just needed to be suppressed. I spent a fair bit of time working on that. I’m happy to say that build logs are now extremely succinct. Moving forward it will be much easier to spot warning messages as they appear and deal with them immediately. I also spent time with Leo troubleshooting MetaCPAN search results on the staging machine.

I had a discussion about producing a new, cleaner 02packages file for PAUSE. This would be an additional file which PAUSE produces, which I’m sure will be the subject of a blog post from someone else. As a MetaCPAN developer group, we additionally had a discussion about logistics for meta::hack v2, which we’re hoping to hold later this year. The exact date and location have yet to be determined.

I got on the TGV from Lyon to Paris to finish off the day. While on the train I spent some time partially implementing CPAN river data in the MetaCPAN UI.

Added to all of this, Mickey kept me busy every day reviewing code from his pull requests. He was also a valuable resource in helping me to get my own work finished faster.

I’d also like to send a special thank-you to Travis CI. Their customer support was excellent. I had told them we were at a hackathon and having some build issues and they immediately sprang into action. They played a big part in our weekend being even more productive.

Finally, I’d like to thank my employer for giving me time to attend the summit. To me, it feels like on the job training. I learned a great deal while I was hacking and/or chatting with other attendees. I’ll be able to put this knowledge to use on various open source and $work projects as I move forward.

Postscript

I spent my Sunday evening walking around Paris.  I walked from Notre Dame up to the Eiffel Tower, mostly along the Seine.  I got up early the next morning for a run.

I then caught a plane from CDG to FRA.  At FRA I got into a plane bound for YYZ.  After an apparently successful takeoff from FRA we were informed that the flight behind us had spotted liquid leaking from our aircraft.  The flight crew had then ascertained that one of the hydraulic systems was no longer functional.  So, we had to dump 54 tonnes of fuel before landing back at FRA with loads of emergency vehicles waiting for us on the ground.  We then spent probably a couple of hours in the aircraft while crews assessed the damage.  No air conditioning or meals were available during this time and eventually we all piled into buses that brought us back to the terminal.  Then into another bus that brought us to a fancy airport hotel where there was a warm meal waiting for us.  I got a text that my flight was rescheduled for 18:30 the next day.  For various reasons my 18:30 flight actually left the ground at 22:30.  So, I got to YYZ on Wednesday at 00:30 rather than on Monday at 19:30 as originally planned.

I’m glad we all got home safely. The way the airline handled it was mostly fine. It was a bit weak on communication, but otherwise decent. My big takeaway is that I’ve learned for next time to pack more stuff that I think I might need into my carry-on rather than my checked bag. 🙂

Viewing Your Module Permissions on MetaCPAN

We’re currently at the Perl Toolchain Summit in Lyon, working hard on improving MetaCPAN. One feature which we went live with yesterday is a view on CPAN module permissions. This means that you can now easily see which modules any CPAN author has permission to upload.

If you want to see every module which Neil Bowers has permissions on, you can go to https://metacpan.org/permission/author/NEILB. You can get to this page via the module permissions link on the left sidebar of a MetaCPAN author page.

To see who has permissions to upload a particular module, you can use the module view. This is not yet linked from MetaCPAN pages.

To see who has permissions to upload a particular distribution, you can use the distribution view. You can get to this page via the upload permissions link on the left sidebar of a MetaCPAN release page.

This new feature is helpful in a few ways. Firstly, if you’re looking for someone to patch and release a module for you, you can now easily view everyone who has permissions to do this. It’s tempting to believe that the last person who released a module is responsible for it, but the reality is that in many cases there are several people who can upload a new release. This helps shift the burden from one person to multiple people in many cases. In practice, if you want to chase someone to upload a new version of module X, you can now easily find the canonical list of responsible people.

Secondly, if you would like permission to begin uploading a certain module, you can now easily find the module owner. Only the module owner can assign co-maint to you. In the past I’ve made the mistake of contacting the last uploader and asking for co-maint. What I should have done is contact the person who actually is the owner. This can save you some running around trying to contact folks and waiting for replies from the wrong people.

Lastly, you can now audit module permissions. You may notice when looking at permissions that there are authors who should not have co-maint on a module. Or you may notice that authors have co-maint on some modules in a distribution but not on others. Having inconsistent permissions on modules in a release can lead to problems when a distribution is released by an author who has some missing permissions.

So, please do have a look at your permissions and give them a sanity check. If you notice a problem with a module which you don’t own, contact the PAUSE admins at [email protected] and they’ll be happy to work with you to sort out the correct permissions.