Will Russia replace the Progress cargo spacecraft with the Argo?

I came across a story of sorts on a website called The Moscow Times that said that Russia was planning to create a reusable rocket to compete with Elon Musk.

I did not know that Elon Musk was a rocket.

The details in the article from 30 September seem to conflict wildly. The rocket becomes a spacecraft and requires 10 billion USD to design, and should at that price compete with SpaceX’ Falcon 9 in the “carrier rockets market”.

Considering that in its 40-year existence, Progress has flown a little over 150 flights, and considering that according to the article, Argo would have to cost less than 20 million USD per flight, Argo would have to fly 500 times to recoup its design costs. That seems a tad on the optimistic, not to mention unbusinesslike, side—normally you would want to recoup costs before that.
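The break-even arithmetic is simple enough to sketch (figures as reported in the article; variable names are my own):

```python
# Back-of-the-envelope break-even check, using the figures as reported
# by The Moscow Times (all amounts in USD).
design_cost = 10_000_000_000      # reported design budget
price_per_flight = 20_000_000     # reported maximum price per flight

flights_to_break_even = design_cost / price_per_flight
print(flights_to_break_even)      # 500.0
```

Note that this treats the entire flight price as recouped design cost; with per-flight operating costs included, break-even would come even later.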

Luckily The Moscow Times linked to its source, an article from the same day in a publication called RBC, and even though I do not speak a word of Russian and had to read the whole thing using Google’s clunky translation service, that article seems to make a whole lot more sense.

What Roscosmos, the Russian space agency, appears to want to do in the relatively near future (assuming the translation is correct and the publication journalistic) is to replace the current Progress cargo spacecraft with a new, reusable spacecraft called Argo for ISS resupply missions.

A secondary use would be as an unmanned orbital research platform that can stay in orbit for up to 30 days and safely return its cargo.

The Argo is intended to compete with the SpaceX Dragon and indeed looks a lot like it.

Some data extracted (hopefully correctly) from the article:

  • Launch platform: Soyuz 2.1b.
  • Start of programme operation: 2024.
  • Duration of programme operation: 10 years.
  • Expected cost: 10 million USD for launch and landing.
  • Expected costs per 20 flights: 196 million USD, including launch, landing and after-flight maintenance.
  • Expected price: less than 50 million USD per launch.
  • Payload capacity: 11 m³ or 2 tonnes; 1 tonne for return flights.
  • Flight time as part of a station: 300 days.
  • Total mass: 11.5 tonnes.
  • Construction: 52% composite materials.

Like the Dragon, only the capsule part of the spacecraft would be reusable, with the ‘trunk’ being jettisoned during the return flight.

There are a few things worthy of discussion.

The USA is planning to withdraw from the ISS in 2024. The ISS also has a natural life span; you cannot just put a space station in orbit and assume it will stay intact forever. The ISS was originally planned to last until 2013, but I have seen claims that with the right upgrades it might survive as a viable space station until 2028.

So what do the Russians plan to supply between 2028 and 2034? One observer, Vitaly Yegorov, suggests they might sell supply flights to an upcoming Chinese space station.

And who are the Russians going to compete with? SpaceX’ customer NASA does not plan to stay around that long on the ISS. But are they even considering Roscosmos for supply services? I am currently aware of Roscosmos selling them astronaut ferry services at 80 million USD per seat. That is the lucrative business that is currently under threat from SpaceX and Boeing. ESA and JAXA in the meantime have their own supply craft.

The article also points out that currently there is not much demand for returning goods from the ISS. In that sense, according to Yegorov, the Argo competes with other Russian spacecraft like the Progress-MS and the Soyuz-MS. Yegorov: “Perhaps there will be a need for the delivery of goods to a lunar orbit. And, I think, with a sufficiently powerful rocket, the Argo will be able to make interplanetary flights.”

So what is not clear to me is whether this Argo spacecraft is merely being designed to bring Roscosmos’ own costs down, or whether they actually plan on selling services that use the Argo.

The most boring sport, Formula 1, is using Youtube to get better

I am not going to lie—when I watched Formula 1 in the 1990s, it was mostly because my fellow countryman Jos Verstappen was enjoying a moderate amount of success in the sport.

And when I started watching it again in the 2010s, it was because Jos’ son Max was entering the same sport, heralded as a great talent.

Formula 1, the fastest sport on earth, has a reputation of excess. Fast cars, beautiful women (not as drivers, unfortunately), cosmopolitan cities, and money and champagne flowing richly. Regardless of how deserved this reputation is, the sport itself, when you have stopped looking at everything that surrounds it and sit down to watch a race, is … often a complete snooze fest.

A Formula 1 race starts with the driver who proved himself the fastest during the qualifying session a day earlier at the front of the grid, followed by the second fastest car, and so on.

The result is that the line-up on the starting grid is a pretty good predictor of not just who is going to win, but in which order the drivers will finish. Formula 1 races are often little more than glorified processions.

It is true that the starting grid does not always predict the results. During the race, drivers will meet with accidents and mechanical problems that may throw them back a few places or even remove them from the race; teams that are good at qualifying, which requires being very fast for just a few laps, don’t always manage to bring that same performance for an entire race (‘race pace’); cars are required to pit at least once, which allows for undercuts and overcuts; and there are a thousand other small ways a race can be won or lost – and the fans know what these are.

That makes Formula 1 a (somewhat) enjoyable sport for the initiated: if you know what you are watching, if you can recognise all the tell-tale signs that something special is going on, if you know the ramifications of details as they unfold in front of you. But that also means that in order to get to like Formula 1, you must already be heavily invested in it. And most people start the other way around; they learn about a thing because they like the thing.

Formula 1 has taken to Youtube to remedy this as well as they can. Almost in good essaying tradition, they will extensively show you before a race what is going to happen, they will show you the race as it is happening, and then afterwards they will explain to you what you have seen.

Over the course of the two weeks between races, you can expect to see the following:

  • Five Shocking Moments – looking back at this race in previous years.
  • Circuit Guide – one of the current crop of drivers explains how they approach the track.
  • Drivers Press Conference – 5 drivers answer questions from the press.
  • Highlights from the 3 practice sessions and from the qualification session, one video each.
  • Paddock Pass – Will Buxton explaining the challenges for each team and interviewing a shedload of drivers.
  • F1 Live: the half hour run up to the race broadcast live.

After the race, Formula 1 will publish a video of race highlights and then the recurring features return:

  • Paddock Pass – another episode, this one post-race: reactions from the drivers.
  • Top 10 Onboards – the 10 most interesting radio messages between drivers and their teams.
  • Jolyon Palmer’s Analysis – a former Formula 1 driver dives deep on some of the things that made the race interesting, reviewing video footage.

And then there are videos that aren’t tied to any specific race, but that do work well in explaining how the sport works. In the past month or so we had:

  • 2019 Drivers’ First F1 Wins – what was the first win of the current crop of drivers?
  • Esteban Ocon’s Journey to F1 and Back – Ocon is a former F1 driver who will return next season.
  • How do F1 Drivers Explain F1?
  • Top 10 Cheeky F1 Innovations – innovations that were eventually banned.
  • Grill the Grid – two drivers of the same team quizzed about F1’s past.
  • 2021 F1 Car First Look – the regulations are ever changing and the car designs follow.

(I cannot embed these videos here, so I have linked to some of them above.)

All these features make it so you can get initiated in the sport at your own pace, which makes it easier to enjoy the sport even if some of the races are, on the surface at least, boring.

Freelance.nl is almost exclusively for intermediaries (translated from Dutch)

I am a freelance web developer. That means I work on websites for a living as a one-person business, not as an employee.

Most of my clients find me directly or through my network. I do, however, also have an account on freelance.nl, the largest marketplace for freelancers in the Netherlands (at least, it was in 2015, when I last measured that).

In the late 2000s I measured how many assignments on freelance.nl had been posted by intermediaries/recruiters, and how many by actual clients. I repeated that measurement in 2015 and again just now (that is, in 2019).

The ratio of clients to recruiters was in:

ca. 2008 – 3:2

2015 – 4:5

2019 – 1:20

Here is today’s measurement of assignments posted by clients:

[screenshot: match no intermediaries - 23 matches]

and of assignments via recruiters (the total includes assignments posted by clients):

[screenshot: do match intermediaries - 531 matches]

Measurements and comparisons like these come with a truckload of caveats.

Freelance.nl is not only one of the largest, but also one of the oldest surviving online marketplaces for freelancers in the Netherlands. The site continuously works on improving itself, but one side effect is that it is hard to compare measurements from 2009 with measurements from 2019.

For example, at the time of my 2015 measurement the site still had a web development category; nowadays that is ICT, which potentially casts a much wider net.

Also, the client-to-recruiter ratio for flower arrangers may well look a lot healthier.

And there are many more reasons why these measurements are hard to compare. But I am not a scientist, I am a business owner, and sometimes that means working with the numbers you have rather than the numbers you ought to have.

For me personally this ratio is relevant. I have never worked through intermediaries – it would go too far to explain why, but in short it comes down to perverse incentives putting an enormous amount of noise on the line for assignments via intermediaries; indeed, you often cannot even be sure there is an actual assignment – and so it makes quite a difference whether a site consists for 70% of real jobs or for 95% of jobs of which it remains to be seen whether they amount to anything.

There might still have been a mitigating circumstance if the number of web development assignments had remained the same in absolute terms, but that does not appear to be the case. Measured over roughly the same period (end of summer), the number of assignments in 2019 is a quarter of what it was in 2015.

It may be that I need to revise my opinion of intermediaries, but it is more likely that freelance.nl will become a less conspicuous blip on my radar.

Update 18 September 2019

When I wanted to respond to one of those rare assignments-posted-by-clients, the placeholder text of the reply field caught my eye:

“Dear recruiter, I am the best candidate, because…”

That really is intermediary-speak. Real clients and real contractors do not address each other that way. So regardless of the actual situation (which, as mentioned, is hard to measure and compare), freelance.nl apparently sees itself, on the assignment-posting side, primarily as a site for intermediaries.

In short: a popular website that I used to try and find work as a freelancer has recently seen a large shift from mostly posting work by actual clients to largely posting work by recruiters. Since, in my experience, postings by recruiters rarely represent actual work, this makes the aforementioned website less useful to me.

Possibly crooked judge gets taken off case about definitely bad doctor

The court of The Hague is perhaps not known as the most even-handed in the world. This is the court where large, foreign media conglomerates shop for copyright jurisprudence. This is also the court that committed a crime in 2014 when it advertised for fresh judges, saying that women need not apply. That was a clear case of discrimination based on gender, although I doubt anyone served even a day’s worth of gaol time for it.

So when this court dismisses a judge for being biased, that probably means something.

In an appeal in a case between Google and a doctor who had mistreated a patient, a judge was dismissed by the court over a possible conflict of interest, Emerce reported today. The plastic surgeon in question had been included on a blacklist, Zwarte Lijst Artsen, which bases its information on another, more opaque blacklist called BIG Register.

The people who run Zwarte Lijst Artsen run a companion blacklist on judges called Zwarte Lijst Rechters, which mainly focusses on judges who have helped absolve doctors from malpractice cases. As it happens, the judge from the initial court case, which was won by the doctor, was on this blacklist, so naturally Google appealed.

When it turned out that a judge in the appeal case also was on that blacklist, the court was unimpressed and unamused, and dismissed her.

At the time of the initial case, legal blogger Arnoud Engelfriet opined that the verdict was as expected and unremarkable: “Considering these facts, the verdict does not surprise me. I also would not call it trail-blazing.”

Engelfriet’s reasoning (referred to above by ‘these facts’) is a little bit hard to follow, so I won’t go into it here. Suffice it to say that if the BIG Register is so hard for average patients to find and peruse that judges see no reason to shut it down, while entries on another blacklist that is apparently transparent and usable are made hard to find, the court is basically saying that blacklists are de facto only allowed if they are unusable. And in my view that is not a fair weighing of the privacy rights of doctors against the rights of patients, and a neglect of one’s judicial duty.

The judge in the appeal case argued that she was not influenced by her own presence on a blacklist, because the blacklist for judges was not as impactful as the one for doctors. The court deemed that argument irrelevant: “[This is not about] the possibility of a subjective impartiality, but about the objectively justified fear for impartiality”.

In other words, the court wasn’t so much worried that the judge might have a conflict of interest as it was that one of the parties would have the feeling that they were not being treated fairly.

The court will now have to appoint a new judge and then the saga of the plastic surgeon and her pals, the possibly crooked judges, can continue.

Test: scaling images up

I was playing around with scaling up images in The GIMP and stumbled upon a method (scale to larger than you need, then scale down to the desired result) that seemed to get exceptionally good results.

I wanted to find out if this was a fluke, so I ran some tests.

My conclusion appears to be either that playing around to find the right method is exactly what you need, or that more tests are needed.

Scaling images up means that if you have an image of a certain size (w × h pixels), you produce a version of that image that is larger (e.g. 2w × 2h pixels).

Despite what Hollywood shows like to pretend, this does not produce an image of equal quality. Upscaling an image generally leads to ugliness, so it is your task to find the method that works best. If you have access to a larger original of the image you are about to scale up, it is almost always better to work from that original image.

Upscaling works by inventing new pixels. The algorithm must take guesses as to what such a new pixel would look like. Typically this works by using neighbouring pixels as hints at least somewhere in the process.

Illustration: how do you scale a 2 pixel wide image to a 3 pixel wide one? You could choose to only copy pixels, meaning that the ratio between the 2 halves of the image will become skewed, or you could choose to mix pixels, meaning there will be colours in the image that weren’t there before.
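That illustration can be made concrete with a minimal pure-Python sketch (my own toy functions, not The GIMP’s actual algorithms), applied to a one-row image of one black and one white pixel:

```python
def scale_nearest(row, new_width):
    """'Copy pixels': each new pixel takes the value of the nearest source pixel."""
    w = len(row)
    return [row[min(w - 1, int((x + 0.5) * w / new_width))] for x in range(new_width)]

def scale_linear(row, new_width):
    """'Mix pixels': each new pixel is a weighted blend of its two neighbours."""
    w = len(row)
    out = []
    for x in range(new_width):
        pos = (x + 0.5) * w / new_width - 0.5   # position in source coordinates
        left = max(0, min(w - 1, int(pos)))
        right = min(w - 1, left + 1)
        frac = max(0.0, min(1.0, pos - left))
        out.append(round(row[left] * (1 - frac) + row[right] * frac))
    return out

row = [0, 255]                 # one black pixel, one white pixel
print(scale_nearest(row, 3))   # [0, 255, 255] – the white half is duplicated: skewed ratio
print(scale_linear(row, 3))    # [0, 128, 255] – a grey appears that was not there before
```

Both results are valid 3-pixel images; they simply make different trade-offs, which is exactly the choice the scaling algorithms below make.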

In the following, your browser may itself scale images up or down to make them fit the available space. I chose widths to scale to that should work fine with the current settings of my blog, but you may have to view the images separately to get a real impression of what they look like.

I started this test with two images:

– The source image, 300 pixels wide.

– The comparison image, 600 pixels wide.

Both images were produced by scaling down (method: cubic) from an approximately 1600 pixel-wide original.

The 300 pixel version would be the source of all the upscale tests, the 600 pixel version would serve as the control—as the ideal target.

All tests were performed with The GIMP.

The GIMP has traditionally had three scaling settings: none, linear and cubic.

‘None’ will try and fit pixels into new pixels, duplicating and discarding pixels where necessary. The result will look blocky regardless of whether you are scaling up or down. In my experience, the best use case for ‘none’ is when you are scaling down to exact halves, quarters or eighths, or up to exact doubles, quadruples, octuples et cetera.

‘Linear’ and ‘cubic’ are siblings; they mix pixels where necessary, with cubic doing this the strongest. Cubic is brilliant for scaling down.

I used two target widths: 400 pixels and 600 pixels.

(There is no 400 pixel control image, but I trust the 600 pixel image will suffice here.)

I applied the following tests:

none: scale up to the target width using scaling algorithm ‘none’.

lin: scale up to the target width using scaling algorithm ‘linear’.

cub: scale up to the target width using scaling algorithm ‘cubic’.

none + cub: scale up to more than the target width using scaling algorithm ‘none’, then scale down to the target width using scaling algorithm ‘cubic’.
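That last recipe can be sketched with Pillow as a stand-in for The GIMP (an assumption on my part: Pillow’s NEAREST and BICUBIC filters correspond roughly to GIMP’s ‘none’ and ‘cubic’; the overshoot factor is arbitrary):

```python
from PIL import Image

def scale_over_then_down(img, target_width, overshoot=1.5):
    """Scale up past the target using 'none' (NEAREST), then back down
    to the target using 'cubic' (BICUBIC)."""
    w, h = img.size
    big_w = int(target_width * overshoot)
    big = img.resize((big_w, big_w * h // w), Image.NEAREST)             # blocky upscale
    return big.resize((target_width, target_width * h // w), Image.BICUBIC)  # smooth downscale

# Stand-in for the 300-pixel-wide source image used in the tests.
source = Image.new("RGB", (300, 200), "grey")
result = scale_over_then_down(source, 400)   # 300 → 600 via 'none', 600 → 400 via 'cubic'
print(result.size)                           # (400, 266)
```

With `overshoot=1.5` and a 400-pixel target this reproduces the 600-pixel intermediate step described above.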

Scaled to 400 pixels wide (factor 1.3)

Scaled to 400 pixels wide using ‘none’:

Scaled to 400 pixels wide using ‘linear’:

Scaled to 400 pixels wide using ‘cubic’:

Scaled to 400 pixels wide by scaling up to 600 pixels wide using ‘none’, then scaling down to 400 pixels wide using ‘cubic’:

Scaled to 600 pixels wide (factor 2)

Scaled to 600 pixels wide using ‘none’:

Scaled to 600 pixels wide using ‘linear’:

Scaled to 600 pixels wide using ‘cubic’:

Scaled to 600 pixels wide by scaling up to 900 pixels wide using ‘none’, then scaling down to 600 pixels wide using ‘cubic’:

My hope had been that the latter method would provide the best upscaled images, but to be honest, I do not see much difference between scaling up with the linear setting and the method where you first scale up past the target using none, then scale down using cubic. In fact, having done some pixel peeping, I think that I prefer—for this test at least—the images scaled up using the linear algorithm.

[Shown here: the difference between a linearly upscaled image and an image scaled up using the scale-over-then-down method.]

All images were saved at JPEG quality level 82, for no other reason than that is my default setting.

The difference between a cheapo ‘netbook’ and a high-end laptop is…

… about 450 gigabytes in storage.

[two screenshots]

I was looking for a cheap, small form-factor laptop on a comparison site that lists thousands of them and I found plenty of cheap ones.

When I made the two screenshots above, I had only selected a screen size, and I had sorted the results by price. The left side of the illustration shows Chromebooks and such, with storage between 16 and 64 gigabytes and prices around 150 euros. The thing I changed to get the results on the right (prices around 1,000 euros) was to set the minimum storage to 500 GB.

When I indicated I needed more than just a handful of bytes of storage, the prices sky-rocketed.

Now I know there are more differences than just storage between these two categories, but I don’t need a better screen or a faster processor to watch some videos, write e-mails and read blog posts. Storage would be nice though.

I guess if you want a cheap, small laptop with a decent amount of storage, you will have to swap out the SSD yourself.

Drupal 7 module integration testing

The information below is available in other places, but I figured I would bring it together as a sort of note-to-self, because this had me stumped for a day.

Suppose you have a Drupal contrib module A that you are writing tests for.

Contrib module B is optionally extending the functionality of module A, and you want to test the integration of these two modules.

Drupal has tons of modules that either are built specifically to extend the functionality of another module, or that are built as some sort of low-level API module that lets you do cooler stuff with the functionality you already have. For example, the Feeds module is optionally extended by Feeds Tamper, Feeds Import Preview, Feeds Entity Processor and so on.

So let us say that somewhere in your code for module A you have the following:

if (function_exists('module_b_api_function')) {
  // Do cool stuff with module_b_api_function().
}

Now in order for your test runner to call this function, you need to tell it about the existence of the dependent module. It needs to be able to activate the module or to fail gracefully if it cannot (for instance if the second module does not exist).

The following assumes you are acquainted with the Drupal 7 built-in testing system (Simpletest), specifically with the DrupalWebTestCase.

There are three places where you define the integrated module you want to load: your module’s .info file, and the setUp() and getInfo() methods of the .test file for your integration test.

1. In your module’s .info file, add:

test_dependencies[] = module_b

(where ‘module_b’ should be replaced by the name of the module that provides the extended functionality).

This alerts the drupal.org test runner that it needs to add module B to its code base.

1.b. If you are patching an existing contrib module, you may wish to create a separate patch of just this change first, because the drupal.org test runner needs to know about your test_dependency before it starts running the actual integration test patch.

Update 20 August 2019: this preliminary patch must be committed to the module’s development branch by the module’s maintainer. This is something that at first completely flew by me, because all of the documentation on this particular test runner quirk was written by module maintainers, and when they write ‘commit’, it means something else than when I read ‘commit’—in my mind, committing refers to my local repository. Yeah, I know…

2. Your setUp() method might look like this:

public function setUp(array $modules = array()) {
  $modules[] = 'module_b';
  parent::setUp($modules);
}


This enables the modules during the test run.

3. Your getInfo() might look like this:

public static function getInfo() {
  return array(
    'name' => 'Module B integration tests',
    'description' => 'Test integration with the B module.',
    'group' => 'module_a_group',
    'dependencies' => array('module_b'),
  );
}
Simpletest uses the return value, amongst other things, to check (in simpletest_test_get_all()) that the modules the test depends on are discoverable. If they are not, your module integration test is skipped.

If you leave out this third step, the testing system will halt with a fairly useless error message upon discovering it cannot load the module. That is OK for you, now that you know what is happening, but not for others who might never have seen your tests and are just seeing their own test runs fail. Having your test skipped in case of a missing module is nicer.

Google’s featured snippets algorithm is quite smart, at times

As I was trawling through the statistics for this site, I noticed how popular my post about creating rounded corners with The Gimp is.

I mean, I knew it was popular, it is the post with by far the most comments on this blog, but I did not know it outperformed other posts by an order of magnitude.

And one reason, it turns out, is that Google heaps mountains of love on that post for search phrases like “gimp rounded corners”. In fact, my post is presented as a featured snippet for that phrase (without the quotation marks), meaning that not only is it returned as the first search result, but it also receives a special presentation that makes it really stand out (see illustration).

[screenshot of Google search results]

But what I find most remarkable is how Google’s algorithms managed to extract a numbered list from the way I did it. Not so much that they translated “Step 1” to just “1”, but that they realised that labelling things “step” is one of a myriad of ways one can present a list.

[screenshot of my blogpost]

Also note how the algorithm automatically skipped my intro. (That is not just smart, that is wise.)

A customer once asked me if I could make a tool that would take a person’s resumé, regardless of how they had formatted it, and extract all relevant information from it in a structured manner. I said yes, I can do this, it is pretty much what I trained to do at university, and an optimistic estimate for my time would be about 18 months.

This upset the customer no end, because they thought a week and a half, tops, was what was needed.

I hope you see the similarities between the two projects, and assuming I know my stuff, a lot of work must have gone into the Featured Snippets project just to make it feel like it works most of the time. Things can still go wrong with search snippets (says a Google blog post from January this year). People will inject their tutorials and explanations with their own personality and opinions, which on the whole must be a good thing, except that sometimes these personalities and opinions are of a rather unsavoury type. “How to torture puppies” is not a tutorial healthy people would like to see boosted by Google.

Google must have seen their Featured Snippets feature as just one way of organising search results in a manner that is most helpful to its users, but a ‘naive’ visitor may see Google giving certain results a pedestal, and that is a valid interpretation. (Certainly the SEO world appears to be eyeing search snippets with a greed that belies the mere interpretation of Featured Snippets as ‘superior presentation’.)

State of the CMS in 2018

After predicting in 2004 (without naming names) that tools like Wordpress, Drupal and Joomla might become popular CMSes … they did! In 2010 and 2014 I followed up with articles exploring which of these tools had become popular and how they described themselves over time respectively.

Re-reading these articles makes me realise how quaint the premise must seem to a modern audience. It is as if I predicted horses would be called horses in the future. What is so special about predicting the obvious? But even though in 2018 these tools appear to be the very definition of CMS-es, in 2006 they weren’t. If you Googled for CMS-es 12 years ago, you would get completely different names (none of which I remember as they sank into obscurity over time).

In those days you had, apart from ‘real’ CMS-es also forum software, blogging software, so-called ‘nukes’ (community software) and so on. Wordpress and Drupal were blogging tools back then, and Joomla a nuke.

So let us see what has changed.

Wordpress
2010: Semantic personal publishing platform
2014: Web software you can use to create a beautiful website or blog
2018: Open source software you can use to create a beautiful website, blog, or app

Drupal
2010: Open source content management system
2014: Open source content management platform
2018: Open source content management system

Joomla
2010: Dynamic portal engine and content management system
2014: Content management system
2018: Content management system

As you can see, barely anything has changed in the last four years; the owners and developers see their tools as conceptually the same (even though the web has changed a lot in that time).

Something else that hasn’t changed much is popularity. Wordpress, Drupal and Joomla are still the outright leaders, and like four years ago, Wordpress still dominates with a market share of about 60%. What has changed is that both Drupal and Joomla have shrunk; they were the largest of the small CMS-es in 2014 and are even smaller now.

Developments that I find alarming, but pretty much the entire industry seems excited about, are the introductions of headless (or decoupled) CMS-es and of services. In this future, a website is a container that collects and presents data from several sources through standardised APIs, with the CMS being one of those sources. The web itself becomes an app delivery platform in this scenario, and the choice for a CMS becomes less about “what do I want this site to look like?” and more about “how do I want content to be managed?” The word ‘app’ did not appear by accident in Wordpress’ 2018 tagline.

I have not been able to find any evidence of significantly popular commercial CMS-es in 2018. Which probably means that the ones that exist only serve the high-end market.

Update 5 January 2019: Wordpress has released version 5 of its system, which includes the so-called Gutenberg editor, one of the largest changes in the CMS’ history. Why risk alienating your entire user-base by introducing a costly disruption nobody asked for and nobody needs? Because at the bottom of the market, hosted CMSes are busy nibbling at Wordpress’ feet. And even though these CMSes still only make up a tiny part of the market (too small for me to mention half a year ago), this development has got Mullenweg and pals scared. So yeah, speaking of developments, hosted CMSes like Wix and Squarespace might pop up in my next report, 3.5 years from now.

American websites improved due to European privacy laws

An interesting side-effect to the introduction of the GDPR, the latest EU privacy law, was that (for Europeans at least) several American websites improved.

Instead of a dazzling and confusing cornucopia of banners and clickables, the sites of USA Today and NPR refocused on their stated goal, i.e. journalism.

See here for two examples:

[screenshots of the European versions of the USA Today and NPR websites]

Would you not much rather read the European versions of these sites than the American ones?

The one site that seems confused is Google:

[screenshot of a Google search result linking to an LA Times article]

This seems like a link to an article in the LA Times about that same publication suing the city of Los Angeles, but if I click that link, I get a message saying “our website is currently unavailable in most European countries.”

The LA Times has chosen that rather than making a version of its website that does not heavily infringe upon the privacy of its visitors, it will simply show nothing to Europeans.

This is the same Google that for some bizarre reason wants to fine-tune every aspect of my ‘search experience’, to the point that my search results are never the same as anybody else’s results for the same search phrase. Yet they are unable to filter out websites that refuse to show me relevant content.