Will Russia replace the Progress cargo space ship with the Argo?

I came across a story of sorts on a website called The Moscow Times that said that Russia was planning to create a reusable rocket to compete with Elon Musk.

I did not know that Elon Musk was a rocket.

The details in the article from 30 September seem to conflict wildly. The rocket becomes a space craft and requires 10 billion USD to design, and should at that price compete with SpaceX’ Falcon 9 in the “carrier rockets market”.

Considering that in its 40-year existence, Progress has flown a little over 150 flights, and considering that, according to the article, Argo would have to cost less than 20 million USD per flight, Argo would have to fly 500 times to recoup its design costs. That seems a tad on the optimistic, not to mention unbusinesslike, side—normally you would want to recoup costs well before that.
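Taking the article's numbers at face value, the break-even arithmetic is simple. A sketch, which naively treats the full quoted flight price as recoupable and ignores per-flight costs:

```python
design_cost = 10_000_000_000   # USD, the design budget quoted by The Moscow Times
price_per_flight = 20_000_000  # USD, the quoted upper bound per flight

# Number of flights before the design budget is earned back
flights_to_recoup = design_cost // price_per_flight
print(flights_to_recoup)       # 500, versus ~150 Progress flights in 40 years
```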

Luckily The Moscow Times linked to its source, an article from the same day in a publication called RBC, and even though I do not speak a word of Russian and had to read the whole thing using Google’s clunky translation service, that article seems to make a whole lot more sense.

What Roscosmos, the Russian space agency, appears to want to do in the relatively near future (assuming the translation is correct and the publication journalistic) is to replace the current Progress cargo space craft with a new, reusable space craft called Argo for ISS resupply missions.

A secondary role would be to serve for up to 30 days as an unmanned orbital research platform that can safely return its cargo.

The Argo is intended to compete with the SpaceX Dragon and indeed looks a lot like it.

Some data extracted (hopefully correctly) from the article:

  • Launch platform: Soyuz 2.1b.
  • Start of programme operation: 2024.
  • Duration of programme operation: 10 years.
  • Expected cost: 10 million USD for launch and landing.
  • Expected costs per 20 flights: 196 million USD, including launch, landing and after-flight maintenance.
  • Expected price: less than 50 million USD per launch.
  • Payload capacity: 11 m3 or 2 tonnes, 1 tonne for return flights.
  • Flight time as part of a station: 300 days.
  • Total mass: 11.5 tonnes.
  • Construction: 52% composite materials.

Like the Dragon, only the capsule part of the space craft would be reusable, with the ‘trunk’ being jettisoned during the return flight.

There are a few things worthy of discussion.

The USA is planning to withdraw from the ISS in 2024. The ISS also has a natural life span; you cannot just put a space station in orbit and assume it will stay intact forever. The ISS was originally planned to last until 2013, but I have seen claims that with the right upgrades it might survive as a viable space station until 2028.

So what do the Russians plan to supply between 2028 and 2034? One observer, Vitaly Yegorov, suggests they might sell supply flights to an upcoming Chinese space station.

And who are the Russians going to compete with? SpaceX’ customer NASA does not plan to stay around that long on the ISS. But are they even considering Roscosmos for supply services? I am currently aware of Roscosmos selling them astronaut ferry services at 80 million USD per seat. That is the lucrative business that is currently under threat from SpaceX and Boeing. ESA and JAXA in the meantime have their own supply craft.

The article also points out that currently there is not much demand for returning goods from the ISS. In that sense, according to Yegorov, the Argo competes with other Russian spacecraft like the Progress-MS and the Soyuz-MS. Yegorov: “Perhaps there will be a need for the delivery of goods to a lunar orbit. And, I think, with a sufficiently powerful rocket, the Argo will be able to make interplanetary flights.”

So what is not clear to me is whether this Argo spacecraft is merely being designed to bring Roscosmos’ own costs down, or whether they actually plan on selling services that use the Argo.

The most boring sport, Formula 1, is using Youtube to get better

I am not going to lie—when I watched Formula 1 in the 1990s, it was mostly because my fellow countryman Jos Verstappen was enjoying a moderate amount of success in the sport.

And when I started watching it again in the 2010s, it was because Jos’ son Max was entering the same sport, heralded as a great talent.

Formula 1, the fastest sport on earth, has a reputation of excess. Fast cars, beautiful women (not as drivers, unfortunately), cosmopolitan cities, and money and champagne flowing richly. Regardless of how deserved this reputation is, the sport itself, when you have stopped looking at everything that surrounds it and sit down to watch a race, is … often a complete snooze fest.

A Formula 1 race is started by the driver who proved himself to be the fastest during the qualification session a day earlier, followed by the second fastest car and so on.

The result is that the line-up on the starting grid is a pretty good predictor of not just who is going to win, but in which order the drivers will finish. Formula 1 races are often little more than glorified processions.

It is true that the starting grid does not always predict the results. During the race, drivers will meet with accidents and mechanical problems that may throw them back a few places or even remove them from the race; teams that are good at qualifying, which requires being very fast for just a few laps, don’t always manage to bring that same performance for an entire race (‘race pace’); cars are required to pit at least once, which allows for undercuts and overcuts; and there are a thousand other small ways a race can be won or lost–and the fans know what these are.

That makes Formula 1 a (somewhat) enjoyable sport for the initiated. If you know what you are watching, if you can recognise all the tell-tale signs that something special is going on, if you know the ramifications of details as they unfold in front of you. But that also means that in order to get to like Formula 1, you must already be heavily invested in it. And most people start the other way around; they learn about a thing because they like the thing.

Formula 1 has taken to Youtube to remedy this as well as they can. In good essaying tradition almost, they will extensively show you before a race what is going to happen, they will show you the race as it is happening, and then afterwards they will explain to you what you have seen.

Over the course of the two weeks between races, you can expect to see the following:

  • Five Shocking Moments – looking back at this race in previous years.
  • Circuit Guide – one of the current crop of drivers explains how they approach the track.
  • Drivers Press Conference – 5 drivers answer questions from the press.
  • Highlights from the 3 practice sessions and from the qualification session, one video each.
  • Paddock Pass – Will Buxton explaining the challenges for each team and interviewing a shed load of drivers.
  • F1 Live – the half-hour run-up to the race, broadcast live.

After the race, Formula 1 will publish a video of race highlights and then the recurring features return:

  • Paddock Pass – another episode, this one post-race: reactions from the drivers.
  • Top 10 Onboards – the 10 most interesting radio messages between drivers and their teams.
  • Jolyon Palmer’s Analysis – a former Formula 1 driver dives deep on some of the things that made the race interesting, reviewing video footage.

And then there are videos that aren’t tied to any specific race, but that do work well in explaining how the sport works. In the past month or so we had:

  • 2019 Drivers’ First F1 Wins – what was the first win of the current crop of drivers?
  • Esteban Ocon’s Journey to F1 and Back – Ocon is a former F1 driver who will return next season.
  • How do F1 Drivers Explain F1?
  • Top 10 Cheeky F1 Innovations – innovations that were eventually banned.
  • Grill the Grid – two drivers of the same team quizzed about F1’s past.
  • 2021 F1 Car First Look – the regulations are ever changing and the car designs follow.

(I cannot embed these videos here, so I have linked to some of them above.)

All these features let you get initiated in the sport at your own pace, which makes it easier to enjoy it even if some of the races are, on the surface at least, boring.

Freelance.nl is almost exclusively for intermediaries (Dutch)

I am a freelance web developer. That means I work on websites for a living as a one-man band, not as an employee.

Most of my clients find me on their own or through my network. I do, however, also have an account on freelance.nl, the largest marketplace for freelancers in the Netherlands (at least, it was in 2015, when I last measured that).

In the late 2000s I measured how many jobs on freelance.nl had been posted by intermediaries/recruiters, and how many by actual clients. I repeated that measurement in 2015 and again just now (that is, in 2019).

The client-to-recruiter ratio was:

ca. 2008 – 3:2

2015 – 4:5

2019 – 1:20

Here is today’s measurement of jobs posted by clients:

[screenshot: match no intermediaries - 531 matches]

and of jobs posted through recruiters (the total includes jobs posted by clients):

[screenshot: do match intermediaries - 23 matches]

Measurements and comparisons like these come with a truckload of caveats.

Freelance.nl is not only one of the largest, but also one of the oldest surviving online marketplaces for freelancers in the Netherlands. The site continuously works on improving itself, but one consequence is that it is hard to compare measurements from 2009 with measurements from 2019.

For example, at the time of my 2015 measurement the site still had a web development category; these days that is ICT, which potentially casts a much wider net.

Besides, it may well be that the client-to-recruiter ratio looks much healthier for flower arrangers.

And there are many more reasons why these measurements are hard to compare. I am not a scientist, though, but an entrepreneur, and sometimes you work with the numbers you have, not with the numbers you ought to have.

For me personally this ratio is relevant. I have never worked through intermediaries – it would take too long to explain why, but in short it comes down to perverse incentives putting an enormous amount of noise on the line for jobs posted through intermediaries; in fact, you often cannot even be sure there is a job at all – and so it makes quite a difference whether a site consists of 70% real jobs or of 95% jobs that may or may not amount to anything.

There could still have been a mitigating circumstance if the number of web development jobs had stayed the same in absolute terms, but that does not appear to be the case. Measured over roughly the same period (late summer), the number of jobs in 2019 is a quarter of what it was in 2015.

It may be that I need to adjust my opinion of intermediaries, but it is more likely that freelance.nl will become a less prominent blip on my radar.

Update 18 September 2019

When I wanted to respond to one of those rare jobs-posted-by-clients, the placeholder text of the reply field caught my eye:

“Dear recruiter, I am the best candidate, because…”

That really is intermediary-speak. Real clients and real contractors do not address each other that way. So regardless of the actual situation (which, as noted, is hard to measure and compare), freelance.nl apparently sees itself, on the job-posting side, primarily as a site for intermediaries.

In short: a popular website that I used to try and find work as a freelancer has recently seen a large shift from mostly posting work by actual clients to largely posting work by recruiters. Since, in my experience, postings by recruiters rarely represent actual work, this makes the aforementioned website less useful to me.

Possibly crooked judge gets taken off case about definitely bad doctor

The court of The Hague is perhaps not known as the most even-handed in the world. This is the court where large, foreign media conglomerates shop for copyright jurisprudence. This is also the court that committed a crime in 2014 when it advertised for fresh judges, saying that women need not apply. That was a clear case of discrimination based on gender, although I doubt anyone served even a day’s worth of gaol time for it.

So when this court dismisses a judge for being biased, that probably means something.

In an appeal in a case between Google and a doctor who had mistreated a patient, a judge was dismissed by the court over a possible conflict of interest, Emerce reported today. The plastic surgeon that this was about had been included on a blacklist, Zwarte Lijst Artsen, that bases its information on another, more opaque blacklist called BIG Register.

The people who run Zwarte Lijst Artsen run a companion blacklist on judges called Zwarte Lijst Rechters, which mainly focusses on judges who have helped absolve doctors from malpractice cases. As it happens, the judge from the initial court case, which was won by the doctor, was on this blacklist, so naturally Google appealed.

When it turned out that a judge in the appeal case also was on that blacklist, the court was unimpressed and unamused, and dismissed her.

At the time of the initial case, legal blogger Arnoud Engelfriet opined that the verdict was as expected and unremarkable: “Considering these facts, the verdict does not surprise me. I also would not call it trail-blazing.”

Engelfriet’s reasoning (referred to above by ‘these facts’) is a little hard to follow, so I won’t go into it here. Suffice it to say that if the BIG Register is so hard for average patients to find and peruse that judges see no reason to shut it down, while entries on another blacklist that is apparently transparent and usable are made hard to find, the court is basically saying that blacklists are de facto only allowed if they are unusable. And in my view that is not a fair weighing of the privacy rights of doctors against the rights of patients, and a neglect of one’s judicial duty.

The judge in the appeal case argued that she was not influenced by her own presence on a blacklist because the blacklist for judges was not as impactful as the one for doctors. The court deemed that argument irrelevant: “[This is not about] the possibility of subjective partiality, but about the objectively justified fear of partiality”.

In other words, the court wasn’t so much worried that the judge might have a conflict of interest as it was that one of the parties would have the feeling that they were not being treated fairly.

The court will now have to appoint a new judge and then the saga of the plastic surgeon and her pals, the possibly crooked judges, can continue.

Test: scaling images up

I was playing around with scaling up images in The GIMP and stumbled upon a method (scale to larger than you need, then scale down to the desired result) that seemed to get exceptionally good results.

I wanted to find out if this was a fluke, so I ran some tests.

My conclusion appears to be either that playing around to find the right method is exactly what you need, or that more tests are needed.

Scaling images up means that if you have an image of a certain size (w × h pixels), you produce a version of that image that is larger (e.g. 2w × 2h pixels).

Contrary to what Hollywood shows like to pretend, this does not produce an image of equal quality. Upscaling an image generally leads to ugliness, so it is your task to find the method that works best. If you have access to a larger original of the image you are about to scale up, it is almost always better to work from that original.

Upscaling works by inventing new pixels. The algorithm must take guesses as to what such a new pixel would look like. Typically this works by using neighbouring pixels as hints at least somewhere in the process.


Illustration: how do you scale a 2 pixel wide image to a 3 pixel wide one? You could choose to only copy pixels, meaning that the ratio between the 2 halves of the image will become skewed, or you could choose to mix pixels, meaning there will be colours in the image that weren’t there before.
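The two options from that illustration can be spelled out numerically. A minimal sketch in Python, with made-up grey values for the two source pixels:

```python
a, b = 100, 200              # the two source pixels (grey values)

# Option 1: only copy pixels. The left half of the row is now
# twice as wide as the right half: the ratio is skewed.
copied = [a, a, b]

# Option 2: mix pixels. The proportions survive, but the middle
# pixel is a colour that was not in the source image.
mixed = [a, (a + b) // 2, b]

print(copied)  # [100, 100, 200]
print(mixed)   # [100, 150, 200]
```

Every upscaling algorithm makes some version of this trade-off between skewing and inventing colours.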

In the following, your browser may itself scale images up or down to make them fit the available space. I chose widths to scale to that should work fine with the current settings of my blog, but you may have to view the images separately to get a real impression of what they look like.

I started this test with two images:

– The source image, 300 pixels wide.

– The comparison image, 600 pixels wide.

Both images were produced by scaling down (method: cubic) from an approximately 1600 pixel-wide original.

The 300 pixel version would be the source of all the upscale tests, the 600 pixel version would serve as the control—as the ideal target.

All tests were performed with The GIMP.

The GIMP has traditionally had three scaling settings: none, linear and cubic.

‘None’ will try and fit old pixels into new pixels, duplicating and discarding pixels where necessary. The result will look blocky regardless of whether you are scaling up or down. In my experience, the best use case for ‘none’ is when you are scaling down to exact halves, quarters or eighths, or up to exact doubles, quadruples, octuples et cetera.

‘Linear’ and ‘cubic’ are siblings; they mix pixels where necessary, with cubic doing so most strongly. Cubic is brilliant for scaling down.

I used two target widths: 400 pixels and 600 pixels.

(There is no 400 pixel control image, but I trust the 600 pixel image will suffice here.)

I applied the following tests:

none: scale up to the target width using scaling algorithm ‘none’.

lin: scale up to the target width using scaling algorithm ‘linear’.

cub: scale up to the target width using scaling algorithm ‘cubic’.

none + cub: scale up to more than the target width using scaling algorithm ‘none’, then scale down to the target width using scaling algorithm ‘cubic’.
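To make the ‘none + cub’ idea concrete, here it is reduced to a single row of grey values. This is only a sketch, not GIMP’s actual code: nearest-neighbour copying stands in for ‘none’, and pair-averaging stands in for a real down-scaling filter such as ‘cubic’, with invented pixel values:

```python
def nearest_resize(row, new_len):
    """Resize by copying the nearest source pixel (the 'none' idea)."""
    old_len = len(row)
    return [row[i * old_len // new_len] for i in range(new_len)]

def halve(row):
    """Scale down by a factor of 2, averaging neighbouring pixels."""
    return [(row[2 * i] + row[2 * i + 1]) / 2 for i in range(len(row) // 2)]

row = [10, 20, 30]                  # 3 source pixels

direct = nearest_resize(row, 4)     # 'none' straight to the target width
overshoot = nearest_resize(row, 8)  # scale up past the target...
via = halve(overshoot)              # ...then down to the target width

print(direct)  # [10, 10, 20, 30] - blocky, no new colours
print(via)     # [10.0, 15.0, 20.0, 30.0] - the down-scaling step mixed in a 15
```

The detour produces an in-between value that going straight to the target with ‘none’ never would, which is the entire point of scaling over and then down.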

Scaled to 400 pixels wide (factor 1.33)

Scaled to 400 pixels wide using ‘none’:

Scaled to 400 pixels wide using ‘linear’:

Scaled to 400 pixels wide using ‘cubic’:

Scaled to 400 pixels wide by scaling up to 600 pixels wide using ‘none’, then scaling down to 400 pixels wide using ‘cubic’:

Scaled to 600 pixels wide (factor 2)

Scaled to 600 pixels wide using ‘none’:

Scaled to 600 pixels wide using ‘linear’:

Scaled to 600 pixels wide using ‘cubic’:

Scaled to 600 pixels wide by scaling up to 900 pixels wide using ‘none’, then scaling down to 600 pixels wide using ‘cubic’:

My hope had been that the latter method would provide the best upscaled images, but to be honest, I do not see much difference between scaling up with the linear setting and the method where you first scale up past the target using none, then scale down using cubic. In fact, having done some pixel peeping, I think that I prefer—for this test at least—the images scaled up using the linear algorithm.

(Show here the difference between a linearly upscaled image and an image scaled up using the scale-over-then-down method.)

All images were saved at JPEG quality level 82, for no other reason than that is my default setting.

The difference between a cheapo ‘netbook’ and a high-end laptop is…

… about 450 gigabytes in storage.

[two screenshots]

I was looking for a cheap, small form-factor laptop on a comparison site that lists thousands of them and I found plenty of cheap ones.

When I made the two screenshots above, I had only selected a screen size, and I had sorted the results by price. The left side of the illustration shows Chromebooks and such, with storage between 16 and 64 gigabytes and prices around 150 euros. The thing I changed to get the results on the right (prices around 1,000 euros) was to set the minimum storage to 500 GB.

When I indicated I needed more than just a handful of bytes of storage, the prices sky-rocketed.

Now I know there are more differences than just storage between these two categories, but I don’t need a better screen or a faster processor to watch some videos, write e-mails and read blog posts. Storage would be nice though.

I guess if you want a cheap, small laptop with a decent amount of storage, you will have to swap out the SSD yourself.

American websites improved due to European privacy laws

An interesting side-effect to the introduction of the GDPR, the latest EU privacy law, was that (for Europeans at least) several American websites improved.

Instead of a dazzling and confusing cornucopia of banners and clickables, the sites of USA Today and NPR refocused on their stated goal, i.e. journalism.

See here for two examples:

and

Would you not much rather read the European versions of these sites than the American ones?

The one site that seems confused is Google:

This seems like a link to an article in the LA Times about that same publication suing the city of Los Angeles, but if I click that link, I get a message saying “our website is currently unavailable in most European countries.”

Rather than making a version of its website that does not heavily infringe upon the privacy of its visitors, the LA Times has chosen to simply show nothing to Europeans.

This is the same Google that for some bizarre reason wants to fine-tune every aspect of my ‘search experience’, to the point that my search results are never the same as anybody else’s results for the same search phrase. Yet they are unable to filter out websites that refuse to show me relevant content.

Privacy audits and GDPR observations

The introduction of the European privacy act known as GDPR seems to have caused a flurry of work in the web development business, but oddly and unfortunately enough I seem to have been immune to this development.

So I decided that I would go through the process of improving one of my own websites, just for practice, and see what I could learn from that. Here is what I found.

So the GDPR is a law from 2016 that builds on earlier attempts by the European Union to anchor privacy as a basic human right for all its citizens. It is an extension, in a way, of the EU’s attempts to turn itself into a vast, wasteful, undemocratic political entity that enormously exceeds its initial scope. Initially the EU was to be an economic union that dealt with things like standardising on electric outlets and shoe sizes.

What the GDPR added to earlier legislation was a bite. From now on, offenders could be hit with significant fines.

Proponents of the GDPR like to claim that the law is based on the principle of privacy-by-design, meaning you need to structure your systems and services in such a way that people’s private lives remain private, and that if you want more from them, you need to get explicit and freely given permission. Let us see how that pans out, shall we?

In the past few months, unless you have been living under a rock, you have been flooded with privacy related messages. These tended to take one of two forms:

  1. The weak: “Please, please, please, please, please let us keep spamming you. We are begging you.”
  2. The strong: “Here is what will happen. You will give us permission to sell all your personal data to the highest bidder, or we will stop our relationship here.”

If the service needs you more than you need it, you would have gotten the former request. But if you need the service more than they need you, let us say the Googles and Facebooks of this world, they get to dictate the terms under which they use your personal data. That doesn’t sound like privacy-by-design to me, that’s just plain old neo-liberalism and greed at work.

So that is what the GDPR is, but for the proprietors of websites it is much more important to know how to comply. The catch-all case for GDPR compliance is, as you have seen, express and explicit consent. A website owner needs to identify all his uses of personal data, explain to a visitor what those uses mean, and then ask permission for those uses.

Luckily there are a number of exceptions where the rights of the proprietors would be unnecessarily burdened if they had to ask for permission. One such exception is a technical necessity: a website would not work if your user had the option of saying no. For example, in order for a web shop to work, you have to be able to ask the visitor for billing and shipping information.

Another exception is freedom of speech. If you are writing an article about someone, you don’t have to ask them for permission before you publish the article.

Keeping data around for legal obligations is a third exception.

The above nicely lays out how you perform a privacy audit. You make three lists:

  1. Which personal data do you process?
  2. For each of these data, which use do you make of them?
  3. For each of these uses, what are your grounds for it?
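As a sketch, the three lists can be kept together in a single structure. The entries below are invented examples drawn from the exceptions discussed above, not a complete audit:

```python
# One row per (datum, use): the audit maps each use of a personal
# datum to the legal ground for that use.
audit = [
    # (personal datum,   use,                       ground)
    ("shipping address", "delivering an order",     "technical necessity"),
    ("billing address",  "invoicing the customer",  "technical necessity"),
    ("billing address",  "keeping old invoices",    "legal obligation"),
]

# A use without a ground is exactly what the audit must surface.
unjustified = [(datum, use) for datum, use, ground in audit if not ground]
print(unjustified)  # [] - every use in this toy audit has a ground
```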

Apart from this audit, there are other things you need to do that are beyond the scope of this posting. For example, you also need to determine if you export personal data to foreign countries. (For example, if you are in the Netherlands, do you have Facebook buttons on your website? These buttons collect personal data and Facebook is an American company.) And you also need to determine for each item how long you are going to keep it, and so on.

The meanings of several terms seem obvious at first sight, but once you start performing your audit they become vague and confusing.

Personal data are data that can be used to identify a natural person. The logical conclusion might be that nothing then is personal data, because on the internet nobody knows you are a dog. That would make the law toothless and so judges have been using a much roomier definition in which anything that comes close to identifying you can be personal data: names, e-mail addresses, IP addresses and so on. Look out especially for combinations of data. You might argue successfully that an IP address by itself is not personal data, but IP addresses are rarely processed in isolation.

There is a special class of data that gets extra protection, things like gender, age, sexual orientation and so on.

Processing refers to anytime you touch personal data. Collecting contact information is processing personal data. Storing contact information is processing personal data. Sending this information to your e-mail address is processing personal data.

In other words, both ‘personal data’ and ‘process’ are pretty broadly defined.

The website I have been auditing, and for which I have subsequently written a privacy statement, is a Wordpress-based website. Not everything that goes for Wordpress will apply to your website, but I believe several of the lessons I learned could be relevant to any website.

I have identified six elements of a Wordpress website that come into play. If I missed any, please note them in the comments.

  • Wordpress core
  • Plug-ins
  • Themes
  • Widgets
  • Embedded content
  • Hosting

Wordpress core is the base package that you get when you download and install Wordpress on a webserver. Even if all you used Wordpress for was publishing pages and blog posts containing nothing but plain text, you would still be processing personal data.

Plug-ins are pieces of additional functionality created to plug into the Wordpress API (programming interface).

Themes determine the look rather than the functionality of your website.

Widgets are small, very specific pieces of additional functionality that run on top of Wordpress rather than hooking into it.

Embedded content is content hosted somewhere else, but mixed up with your own content. Lots of website owners will for example use the Twitter.com widget to quote tweets in their articles.

A web host is something your Wordpress site runs on top of, and web hosts can collect personal data too. For example, many classic web servers are set up to log every visit by storing the IP address of the visitor, the page they requested and the time of the visit.

There is a strong overlap between plug-ins, themes, widgets and embedded content, to the point where there really is not that much difference under the hood between a plug-in and a theme. The differences are mainly conceptual. For an audit, however, it is useful to treat these as different parts of your website, because your admin interface will typically present these four elements differently.

I spent about 23 hours auditing a fairly simple Wordpress website. In that time I also wrote my privacy policy. That is an insanely large amount of time, if you ask me.

Now for me this is business and those are 23 hours well spent, time that will pay itself back in future projects. But what if you wanted a place on the web for your digital soap box, a place for your rantings and ravings? What if I told you that before you set all that up, you were legally required to spend three whole days figuring out in how many (often inadvertent) ways you were going to violate your visitors’ privacy?

What is more, you are exposed to the same multi-million-dollar fines as large, wealthy organisations are. So far I do not know of a country ogreish enough to impose million-dollar fines on private bloggers, but hey ho, these are strange times.

Would you still go ahead with that website?

So the GDPR is a huge impediment to free speech, and not only that: it limits the speech of smaller, weaker parties such as private bloggers far more than it does the speech of large corporations. To the latter the GDPR is certainly annoying, but ultimately acceptable.

But there are caveats to that conclusion.

Breaches of privacy are themselves also huge impediments to free speech. If you are afraid someone will come after you for what you say, you may be scared into staying silent.

(The thing is though, will the GDPR make much of a difference here? I do not expect the GDPR to make any meaningful difference to the practice of doxing for example. Twitter is as a processor under no obligation to halt the practice, and the doxers themselves can claim a free speech exemption.)

Also, this is a new law and things need some time to settle in. Wordpress has just released a version of its software that comes with a built-in privacy statement and for which it has already performed the privacy audit part of Wordpress Core for you. If you install no other themes, plugins and widgets, you are almost good to go. (You need to add some info about how you are going to secure your site, how long you are going to keep certain data and so on.)

So there is some hope there.

One man, 50 Bic pens

(An Experiment and a Fantastically Boring Tale.)

In 2011 I bought 50 pens in an attempt to stem the constant trickle of pen disappearances.

Like matching socks, ballpoint pens have this obscure, almost life-like ability to get lost just when you need them, and this seemed to be a good reason to buy way more pens than one man could chew on.

Last week I took a fresh pen from the box, because all the others had disappeared, and it would barely write. Dried up. I tried another from the box. Dried up. And so on.

I counted the dried-up pens I had left: 22.

So the result of this experiment is that a man can live on 28 pens before he must replenish.

A couple of caveats:

  • I regularly get pens from congresses and what have you, so the disappearance rate is probably higher than 28 pens over the lifetime of one Bic.
  • The period between when I bought my Fantastic Fifty and today neatly straddles the divide between when people needed a pen multiple times a day and when people did most of their stuff online or on their phones. In other words, my pen replacement rate has presumably slowed down.

Now for the good news: according to this selection of life hacks, you can bring a ballpoint back to life by using it to ‘write’ on rubber (for example, the sole of a shoe), and I can happily say, this works.

See also:

  • How long can you use a Bic before it runs out of ink?
  • At its introduction in the 1950s, the pen shown here was called the Atomic Pen, but as the Cold War wore on and the lure of a nuclear age quickly dissipated, Bic changed the name to Cristal. The hole in the cap was introduced in 1991 to prevent a user from choking after accidentally swallowing the cap. (NotASource)

Making complex PHP arrays viewable

When you want to study the contents of PHP arrays, for example when you ask the API of your favourite PHP CMS a question and it returns an array in which the answer is somehow hidden, you can use PHP functions like print_r and var_dump to display the array in a way that makes it easy to study.

Let’s say you define the following array:

$foods = array('plants' => array('fruits', 'vegetables'), 'animals' => 'meat', 'mixed' => array('pies' => 'pies'));

then running print_r($foods) will give you the following result:

Array
(
    [plants] => Array
        (
            [0] => fruits
            [1] => vegetables
        )
    [animals] => meat
    [mixed] => Array
        (
            [pies] => pies
        )
)

This improves the readability quite a bit, because the line breaks, indentation and added information (brackets for keys, “Array” to indicate the type) all help you to visually parse the array.

When you have large arrays to study however, the usefulness of print_r or var_dump diminishes rapidly. It can get quite tricky to remember the indentation level of an array that spans more than a few screens.

This is where tools like Krumo come in; they will present (within a web page) an array or object (or any value really) within a collapsible format. Only when you click on a top element will it fold out to display its contents.

I needed something like Krumo, but since Krumo clocks in at about 100 kilobytes, it can become quite complex to work with if you want more than the basics. (Don’t worry if you were thinking of using Krumo; it is still unsurpassed at simply showing objects and arrays.)

Below I present what I came up with.

Read the rest of this entry »