An anecdotal look at Facebook page reach

Here is one for the books.

This is a graph of the so-called ‘reach’ that my roller derby photography page has on Facebook. Reach means: how many people have been exposed to my photos. (In an earlier blog post I explained why I have a Facebook page in the first place.)

Every dot represents the reach of a post in which I introduce a new photo album to this page.

There are two things that stand out from this graph, both of which are remarkable for reasons I will explain below.

The first thing you will notice is the up-and-down nature of the graph. One post reaches a lot of people and the next barely any at all, but — and here comes the second anomaly — before the autumn of 2017, “barely any at all” still meant more than 2,000 people reached. Since late 2017, reach has dropped into the hundreds.

This is strange because of the way I work. I visit roller derby games on the weekend, prepare an album of photos of a game the day after, and then post the album, containing a few dozen photos, to my Facebook page. Usually the players and sometimes their friends and family will look at the fresh album and that is that. After a week, nobody except a few fans of very popular players engages with the album any more.

In other words, my status updates are limited to the same type of thing over and over again, and although the specific audience changes per game, the expected size of the audience is always the same — namely friends and family of two teams of skaters.* This should be reflected in my reach, but it isn’t.

If anything, my average reach should increase slightly because more and more people ‘like’ my page.

When my reach was still in the thousands, I wasn’t overly concerned about the up-and-down nature of it, because I was still reaching most of the people who would be interested in my photos. When it dropped into the hundreds however, I started to worry a little.

“Over the past few months, I’ve read articles and answered questions from many people who are concerned about declines in organic reach for their Facebook Pages”, wrote one Facebook employee in 2014 in an article titled Organic Reach on Facebook: Your Questions Answered. Let’s see what he had to say.

There are basically three reasons why reach would drop over the course of time. The first is that more and more updates are being shared each day, the second is that people ‘like’ more pages than they did before and the third is that Facebook won’t show everything.

In other words, whenever I share a photo album, Facebook shows it to fewer and fewer people over the course of time, and the more people like my page, the fewer people get to see its fruits.

Facebook does this, it claims, so that it can keep people engaged. If people have too many things of little value to look at, they will get bored. So Facebook prefers to present people with content of high value.

And then he bowls that entire edifice over by saying that companies can buy views for their pages. So much for putting engaging, valuable content first.

I had an interesting experience last month. The online presence of roller derby in the Netherlands is largely concentrated on Facebook. Games are announced using Facebook event pages. After a game, I share links to my photo albums there, because not every skater is a fan of my page, but they may still be interested in photos of that particular event.

On this occasion, somebody posted a comment to my post on the event page. Usually this takes the form of “thank you” or “nice photos” but I like to check, in case somebody wants a photo removed or says something untoward. In this case, though, I could not view the comment because I could not view the post because Facebook had decided (I assume) that my post was not engaging enough for me.

I could see I had posted that link, because Facebook was still showing it among the three posts in the preview of its Discussion tab (event pages are divided into an About and a Discussion tab). And I could also see that somebody had commented, because Facebook notifies you of new comments. The site however just refused to show me either my post or the comment.

So that was an interesting bit of automated gaslighting. Smarter systems, designed to counter trolls, hide postings from other readers but not from the author; Facebook seemingly does it the other way around.

International ad agency Ogilvy (disclosure: I worked for them in a previous life) wrote a white paper in 2014 in which they outline the ongoing decline of Facebook page reach. Their recommendations are that 1) you focus on sub-sets of your audience so that you can better supply them with engaging stories rather than going for a one-size-fits-all approach, and 2) that you return to platform-neutral ground, i.e. your own website. If you want to control the discussion, you have to control the platform.

I am not sure that is such good advice, because Google Search is a platform too now (it wasn’t, or not as much, in 2014) and captures a lot of visitors before they can reach your site. Also, in the case of the amateur event photographer, Facebook may simply be where your audience is, and you don’t get to move them around.

*) Full disclosure: most events I photograph are so-called double headers, in which two roller derby games are played back-to-back. That means that in those cases my audience actually consists of the players, friends and family of four teams. However, that would have side-tracked you into contemplating the nature of roller derby events in a way that is completely irrelevant for this post, hence the condensation of the situation into a form that is easier to understand.

Facebook Location Spam

[Screenshot: facebook-location-spam]

If you check in at a location on Facebook or enter the location for a photo, there is a chance that you will end up linking to spam.

The main reason for this is that Facebook is crap and the people who make Facebook are idiots, but I say this in anger after hacking spam out of my photo albums for 2 hours straight, so I will acknowledge that this is perhaps not the most constructive of explanations. Let me elucidate.

When you try and enter a location in Facebook, the site helpfully offers you a number of suggestions based on the part of the location name you have entered so far. This is not an exhaustive list, i.e. Facebook makes a selection of locations it is going to suggest. If the name of the location is not in the list, you get the option to ‘Just use’ the name you just entered.

In some of Facebook’s forms, you get the option to Add Place. This takes you to a new form in which you can enter some information about the place you just added, including its address. Facebook does not remember what you added last time, so if you have to fix hundreds of photos, you have to fill out thousands of fields (hence me just wasting two hours).

But suppose you are a spamming low-life piece of scum (watch your contaminations, Branko!) and you have somehow managed to automate part of this process: you have now found yourself a way to storm the top of the list of location suggestions. At least, that is how I assume this works. It would make little sense for Facebook to suggest obscure locations, so I assume they automatically suggest popular locations, opening them up to attacks by spammers who have the time, the energy and the tools to game this system.

Presumably, the more people like and check in at these scam locations, the more popular these false locations get.
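My assumption about how this works can be sketched as a popularity-ranked prefix match. To be clear, this is a toy model of my guess, not Facebook’s actual algorithm, and all place names and check-in counts below are invented:

```python
# Toy model of an autocomplete that ranks suggestions by popularity.
# If popularity (e.g. check-in count) decides the order, a spam clone
# that farms enough check-ins floats to the top of the list.

places = {
    "Sporthal Oranjeplein": 40,            # the legitimate venue
    "Sporthal Oranjeplein Den Haag": 900,  # invented spammer clone
    "Sporthal De Blinkerd": 15,
    "Sportcentrum Valkenhuizen": 55,
}

def suggest(prefix, limit=8):
    """Return up to `limit` places matching the prefix, most popular first."""
    matches = [name for name in places
               if name.lower().startswith(prefix.lower())]
    return sorted(matches, key=lambda name: places[name], reverse=True)[:limit]

print(suggest("Sporthal"))
# → ['Sporthal Oranjeplein Den Haag', 'Sporthal Oranjeplein', 'Sporthal De Blinkerd']
```

Under this model, the spammers do not need to outwit anything clever; they only need to outscore the real venue on whatever popularity signal the ranking uses.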

The screenshot illustrates how I have started typing ‘Sporthal’ – Dutch for sports venue – and as you see, Facebook suggests 8 locations. Of those, 3 have been hijacked by spammers, all of which show up in the top 4 (you can tell by the fact they share the same logo).

I have no idea how these scammers manage to hijack locations so completely. They take over both the profile photo and the cover photo and manage to be the only ones to have posting rights. The cover photo seems to be something that a person can suggest for a location, but the other two items aren’t.

I know of at least one location (Sporthal Oranjeplein in The Hague) where there was a somewhat well used, somewhat maintained real location page that was then ‘merged’ with the spam location. Meaning, if you somehow managed to find a link to the original location page and clicked it, Facebook would automatically redirect you to the spam page. In those cases Facebook will helpfully tell you it has merged pages and offer you a way to report an incorrect merge.

This is also useful in cases where locations have been merged with automatically created pages – case in point, links in photo albums leading to Utrecht Disaster (a roller skating hall) now all lead to an auto-generated page about the Heysel Stadium disaster. You can report the mismerge – as useful as pressing a pedestrian crossing call button, I imagine.

So what is the problem? Is there a problem? I mean, I hate spammers and all that, but in the end it is my choice to add a location to my photos, and it is my fault if I don’t properly look at the location I add.

The mismerges are problematic in this respect, because I could link to a proper location only to find out years later that the link is now redirecting to spam.

I also imagine that if locations can be hijacked by spammers, they can be hijacked by phishers and other criminals with more insidious designs.

I don’t know of a way to fix this. Facebook does not want to hire people to add and manage locations, so this is always going to be a problem. It could disable locations altogether, but having people share where they have been and what they have done together happens to be one of its most attractive qualities. Adding the ability to report spam, assuming Facebook would actually follow up on such reports, might help, but I can think of several drawbacks. For one, Facebook (and similar social media services) is known for selectively listening to its users. Why would I report something if I believe they won’t listen anyway? The other problem is that this turns the whole battle over locations into one between two powerful factions (Facebook on the one hand, spammers on the other) in which the regular user is less and less likely to be heard.

Facebook’s problem is a conceptual one. It wants locations to be somewhat community managed, but ignores the fact that the community contains many bad actors.

There is a very simple thing they could have done for my specific problem, though. As I am typing the name of the venue where I have taken my photos, progressively fewer suggestions appear. This makes sense in a world where there is only one location called Sporthal Oranjeplein (staying with my previous example), but Facebook knows of several. Would it be too confusing to show more than one?

Design pattern: event calendar (focussing on WordPress)

Event calendars tell users about interesting events that are about to happen. They can also help create an impression of how busy the near future will be. Furthermore, calendars may double as a navigation or filter tool.

Events as blog posts in WordPress

I’ve helped build a number of event calendars for websites in the past, especially for websites based on the WordPress CMS. For small businesses and organisations that mainly need a website for informational purposes, WordPress is a powerful choice because it is cheap, easy to install, easy to maintain and well supported.

A basic WordPress-based website shows information as a series of blog post abstracts on its homepage, the most recent one at the top and posts getting progressively older as the visitor scrolls down the web page.

A simple way to draw attention to events is to display them as blog posts. WordPress started out as a blogging platform, so it’s well suited for this purpose. There are, however, a number of problems with this approach:

  • Events don’t necessarily mix well with regular blog posts or news items.
  • Regular blog posts are best sorted by publication date, events are best sorted by event date.
  • If you wrote about an event early on, it would get pushed off the screen by more recent posts.

In short, people would have to start hunting for your events or your news or both. For that reason it is best if events and blog posts are separated. This is where event calendars come in.
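The sorting mismatch from the list above can be made concrete: blog posts sort newest-first by publication date, while events sort soonest-first by event date. A minimal sketch with invented sample data:

```python
from datetime import date

# Invented sample data: every item has a publication date,
# and events additionally have the date on which they take place.
posts = [
    {"title": "Season recap", "published": date(2014, 3, 1)},
    {"title": "New sponsor", "published": date(2014, 3, 10)},
]
events = [
    {"title": "Home game", "published": date(2014, 2, 1),
     "event_date": date(2014, 4, 5)},
    {"title": "Fundraiser", "published": date(2014, 3, 5),
     "event_date": date(2014, 3, 20)},
]

# Blog posts: most recently published first.
blog_order = sorted(posts, key=lambda p: p["published"], reverse=True)

# Events: nearest upcoming event first, regardless of announcement date.
event_order = sorted(events, key=lambda e: e["event_date"])

print([p["title"] for p in blog_order])   # → ['New sponsor', 'Season recap']
print([e["title"] for e in event_order])  # → ['Fundraiser', 'Home game']
```

Note how the Home game, announced earliest, would sit at the bottom of a blog-style listing even though it is a perfectly current event; that is exactly the problem a separate calendar solves.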

Luckily WordPress offers a lot of plugins for event calendars. Searching for these plugins in the WordPress plugin directory yielded the following number of hits per search phrase: events (1,001), event calendar (314), event list (841) and so on.

Grid type event calendars

If you look at the screenshots from the top results for each search, you will see that most of the event calendars are displayed as classical calendars, that is to say a matrix in which each column represents a weekday and each row a week.
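That week-per-row, weekday-per-column matrix is such a standard structure that Python ships it in the standard library; this has nothing to do with any specific WordPress plugin, it just shows the shape these calendars all share:

```python
import calendar

# monthcalendar() returns the classic grid: a list of weeks, each week
# a list of 7 day numbers (0 marks days belonging to another month).
grid = calendar.monthcalendar(2014, 9)  # September 2014
for week in grid:
    print(week)
# First row → [1, 2, 3, 4, 5, 6, 7]   (1 September 2014 was a Monday)
```

A grid-type plugin essentially renders this matrix as an HTML table and hangs event links off the day cells.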

[Screenshot: event-wordpress-plugins]


Ads for something you’ve already bought

Lately this happens a lot to me:
1) I search the web for a product.
2) I settle on product X.
3) The ad network remembers my choice.
4) I buy product X.
5) The next two weeks, the web inundates me with ads for product X, even though I have already been sated with said product.

In other words, I keep seeing ads on the web for products I’ve already either bought or rejected.

The mechanism behind this is called targeted advertising. Basically you visit website A which tells ad network Annoy Inc. what you’ve been looking at, then you visit website B which loads ads by Annoy Inc. based on what they know about your interests.
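The mechanism can be sketched as a shared profile keyed on a tracking identifier. This is a deliberately crude toy model; real ad networks are vastly more complicated, and every name in it is invented:

```python
# Toy model of cross-site targeted advertising. A tracking id
# identifies the visitor; every participating site reports what was
# viewed, and every site asks the network which ad to show.

profiles = {}  # tracking id -> list of products viewed

def report_view(tracking_id, product):
    """Called by site A: tell the ad network what the visitor looked at."""
    profiles.setdefault(tracking_id, []).append(product)

def pick_ad(tracking_id):
    """Called by site B: serve an ad based on the accumulated profile."""
    viewed = profiles.get(tracking_id, [])
    return f"Ad for {viewed[-1]}" if viewed else "Generic ad"

report_view("user-42", "camera bag")  # a visit to a web shop
print(pick_ad("user-42"))             # a news site now shows: Ad for camera bag
```

Note what the model lacks: nothing ever tells it that the visitor bought the camera bag, so it will cheerfully keep serving that ad for weeks. That is the complaint in a nutshell.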

Apparently I am a little bit behind the curve, because this sort of thing was already happening in 2012. The Slate article calls the practice creepy and focusses on the fact that the advertisements follow you around without actually serving a purpose. I’d probably use a less strong word and call it strange rather than creepy, but then I don’t need to draw in many readers in order to serve them targeted ads, like Slate does.

It seems that advertising has become smart enough to realise what you are interested in at any given point, but not smart enough to realise when that interest drops abruptly or changes in nature. The funny thing is that advertising for something you are no longer interested in is actually worse than advertising for something you have never been interested in. It’s a bit like the one-night stand from two weeks ago showing up at work five times a day to nag you about wanting to do the sex thing again – well, at least the stranger has a chance you will say yes.

Why are companies so stupid? I think part of the problem may be that ad networks really don’t have an incentive to change things. They get paid by the view and can in fact prove that you’ve shown interest in the product that’s being advertised. If manufacturers and sellers want to stop annoying their core customer base, maybe they should get more involved in online advertising. (Or maybe the companies really aren’t that stupid and get something out of it that the consumers have yet to suss out.)


Default browser cookie settings in 2014

(TL/DR? Skip to results.)

Yesterday I wrote that even though social networks currently combine targeted advertising and private user data collection, doing them both is not a requirement for running a profitable social network. The networks can just focus on the former, that is focus on the harvesting and selling of user data, and dispose of the advertising part altogether.

Having the social network and the ad network on the same domain (for example facebook.com) does make things slightly easier for the social network operator, because users may have switched off so-called third party cookies which are stored and read from a different domain (for example doubleclick.com).

The reason why the average user would block third-party cookies is because these cookies are almost exclusively abused for tracking users behind their backs.

How much of a problem is it to advertisers that users block third-party cookies? Not much. Users are typically reluctant to tinker with browser settings, so it depends on the web browser makers and the defaults they choose whether an aspiring social network can plant cookies that another domain may read.
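Incidentally, what makes a cookie “third-party” is just a domain comparison: the domain the cookie belongs to versus the domain in the address bar. A rough sketch of that check (real browsers consult the public-suffix list; this naive last-two-labels rule is only illustrative and mislabels domains like example.co.uk):

```python
def registrable_domain(host):
    # Naive: take the last two labels of the hostname. Real browsers
    # use the public-suffix list instead of this shortcut.
    return ".".join(host.split(".")[-2:])

def is_third_party(page_host, cookie_host):
    """A cookie is third-party when it belongs to a different site
    than the page the visitor is actually looking at."""
    return registrable_domain(page_host) != registrable_domain(cookie_host)

print(is_third_party("www.facebook.com", "facebook.com"))     # → False (first party)
print(is_third_party("www.facebook.com", "doubleclick.com"))  # → True  (third party)
```

This is why hosting the ad network on the social network’s own domain sidesteps third-party blocking entirely: by this test, those cookies are first-party.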

I decided to look into the defaults of modern web browsers, but could not find much information.

Here are some data points:

That leaves some browsers unexplored. Since checking the browsers on my computer was probably going to be easier than Googling anyway, I decided to take that route.

Table: default cookie settings for some web browsers in 2014.

  Browser + version              | Operating system   | Default cookie setting
  Google Chrome 37               | Microsoft Windows  | Allow (all?) cookies
  Microsoft Internet Explorer 11 | Microsoft Windows  | Allow some third-party cookies
  Mozilla Firefox 32             | Microsoft Windows  | Allow third-party cookies
  Apple Safari                   | Apple iOS 7        | Allow local cookies?
  Android browser                | Google Android 4.0 | Allow (all?) cookies?

As you can see, the answers are at times ambiguous and don’t square with the results I linked to, but it would appear that currently most web browsers will let sites track you across domains using third-party cookies.

A note about methodology. This was a quick study to find out what the default cookie settings are. For that, I needed to restore browser defaults and that was not always possible. The mobile devices (iOS and Android) had no way to restore settings to a default so I had to assume that these were the default settings.

I do tinker with my desktop browsers but I rarely do so with my mobile devices, so it’s a reasonable guess that the aforementioned settings are the default ones, I just cannot be absolutely sure.

Another problem was that browser manufacturers use different settings, different terminology and sometimes translations which can make it hard to find out which is which.

Most browsers speak of ‘allowing’ cookies; iOS Safari speaks of blocking them.

The reason I report Chrome’s default as “allow (all?) cookies” rather than “allow all cookies” is because I don’t know if “indirecte cookies” is their Dutch translation of “third-party cookies”. If it is, you can remove the question mark and conclude that Chrome allows all cookies by default.

Internet Explorer has a return-to-default button just for privacy settings, which is much appreciated, and a number of sensible settings collections. Unfortunately the explanation of what these settings mean is rather opaque. For instance, I don’t know what “cookies that can be used to contact you” are.

Firefox’s default is also a ‘sensible’ setting which tells you only in the most general terms what it does, namely that the browser “will remember your browsing, download, form and search history, and keep cookies from websites you visit”.

You can choose to use custom settings and if the defaults for these settings can be assumed to be the same as the ‘sensible’ settings, then their third-party policy is clear if perhaps not sensible: “Accept third-party cookies? Always.”

Safari lets you choose to block cookies: “Always”, “From third parties and advertisers” and “Never”. I assume “and advertisers” is not a separate category from “third parties” and was just inserted to make it clear that these are tracking cookies, but again, that’s just an assumption.

The Android Browser’s setting is the least complicated of all: you can choose Cookies or No cookies, and if you choose the latter I assume most of the useful services on the web become off limits to you. But are there really people who bank online using their smartphone and an operating system made by Google?

If browsers all blocked third-party cookies, you still wouldn’t be safe though. For one thing, what we generally understand as cookies – small bits of data that scripts read and write through the browser’s document.cookie interface – only make up a small part of all the different types of tracking technologies there are.

Dealing with the Dutch cookie law as a web developer

This note about how to comply with the Dutch cookie law is mostly a memo to self, but I believe the information past the fold is also useful to anyone who runs their own website and needs to ensure the privacy of their site’s visitors.


Notes from the Responsive Design trenches

Lately a lot of companies have been asking for websites built along the principles of ‘Responsive Design’. I had to give up on building a responsive website in early 2012 due to lack of time, but in January 2013 I got another chance. (Side-note: both websites are on intranets, so I cannot show them to you.)

Responsive Design is designing a website in such a way that it rearranges itself to look good both on large screens (typically desktop-PCs) and small screens (typically mobile phones).

The text below is first and foremost a memo to self, but it can also be used as an addition to the ultimate Responsive Design primer, the A List Apart article by Ethan Marcotte that started it all. I will explain Responsive Design in a bit more detail below, but if you really want to know what it is about I suggest you read the A List Apart piece.

Although Responsive Design is pretty straightforward to anybody who has done even the most trivial things with Cascading Style Sheets, it is typically used in a wider context that can make things complicated. Hence the need for this intermediate level article.


The LinkedIn endorsement system

LinkedIn has introduced an endorsement system which lets you ‘endorse’ the skills of your connections.

A few quick notes about this:

  • I haven’t checked whether these are skills you entered yourself; that seems to be the case though.
  • I have endorsed wide, easy skills, such as mastering your native tongue.
  • I have endorsed specialized skills that I have witnessed myself, or that are somehow at the core of that connection’s abilities.
  • I have not endorsed skills that sound like a core skill, but that to my knowledge aren’t; for example, if I know a project manager, I am not going to endorse them as change manager, even if I have seen them manage changes after the delivery of a project. Similarly, I have not endorsed interaction designers as user experience experts.
  • In other words, don’t be shy to add the simple stuff to your profile.
  • Also add skills that you know your connections know you possess.
  • So far I have been honest and have only endorsed skills that I knew people possessed.
    I expect some people will just endorse all the skills of their friends or connections.
  • With this system Recommendations are probably going to be more rather than less important, considering my previous note.

Advantages and disadvantages of custom CMSes versus off-the-shelf CMSes

About 95% of my income as a front-end web developer comes from large ad and web agencies that hire me to be a part of project teams. These teams build websites that cost anywhere from 10,000 to 1,000,000 euro.*

The other 5% is from small jobs, and the smallest of those are when other freelancers hire me to update their websites. About ten years ago I wrote a couple of custom content management systems (CMSes) for some of these small customers, because A) that is the sort of thing fledgling web developers do, and B) at the time there weren’t really any good off-the-shelf products I could use.

Lately I have been trying to tempt these customers to switch to off-the-shelf** products like Drupal or WordPress because every time I have to update their custom websites I basically have to learn to understand my own code all over again. This grates.

I have found my customers to be remarkably resistant to temptation, however. The best part? I cannot really blame them.

The main reason my customers resist the switch is simply one of cost. If their sites were simple affairs that consisted of a bunch of static pages and a contact form, the switch would take somewhere between half a day and a day, and even that would be fairly costly for them, considering that any gains for them would be difficult to envision. Sure, site updates and extensions take me longer to implement, but we are talking in the region of hours here. If I have to build them a new feature every year, that might cost six hours instead of four. That means they have to envision major changes to their sites over the next five years, which is tough to do for anyone.

But we’re talking about sites that have evolved, that have acquired all kinds of neat features over the years that do not have third party alternatives in off-the-shelf products.

And who is to say that my customers will even be using that off-the-shelf CMS five years from now? Switching to an off-the-shelf product simply is not a wise investment from their perspective.

Having said that, all my customers who started with an off-the-shelf CMS are still happy users. (Their only problem being that they do not update often enough so that they need to hire me now and again to remove damage caused by hackers.)

I made the following list a couple of weeks ago to discuss this very issue with one of my small customers:

Advantages of using my custom CMS:

  • I know it well.
  • Does not require much maintenance.
  • Every conceivable extension possible.
  • Log-in system uses security through obscurity.

Disadvantages of using my custom CMS:

  • Extensions can be pricey because sometimes I need to reinvent the wheel.
  • Log-in system becomes unsafe once discovered.***
  • Becomes difficult to extend or maintain as soon as I am no longer available.
  • If new legislation applies, implementation is expensive.****

Advantages of off-the-shelf CMSes:

  • Are regularly extended with current web technologies.
  • Security is an ongoing matter of concern.

Disadvantages of off-the-shelf CMSes:

  • Are popular targets for hackers and law enforcers alike.
  • Require continuous updating.

In short, with a custom CMS you pay a relatively high price for each new feature, while an off-the-shelf CMS demands recurring payments for updates. If you are on a budget and you already have a website, it makes little sense to switch.

The only reason why small customers would have to switch, as far as I can tell, is when they stop being small customers.

*) I don’t think I’ve ever actually worked on a million euro website. I have been involved in million euro projects though. Usually what happens in those cases is that a company mounts a million euro effort to transition to the web, and in all those cases this involved multiple websites. Note that I don’t make millions; those pies tend to be divided among a great many people.

**) See also: The blog systems that made it as CMSes.

***) Not, I hasten to note, because I write unsafe systems. Rather because I am the only coder who ever looked at my system, making small flaws more likely to persist. As the saying goes, many eyeballs make excellent eyeball soup. (Hm, I may have remembered that incorrectly.)

****) The EU has just adopted a directive that outlaws most browser cookies. The advantage of a custom CMS is that it only uses the cookies it needs though.

The blog systems that made it as CMSes

Six years ago I blogged that open source CMSes tended to be too difficult to set up and use for small businesses and non-profits. I suggested that a number of blog systems and nukes could step forward and supplant them, and that is exactly what has happened.

In 2008, Joomla!, Wordpress and Drupal were the most popular open source CMSes, even though none of them started out as such. In 2009, these three still led the pack with a wide margin.

Wordpress and Drupal used to be blogging software, and Joomla! (formerly Mambo) was a so-called Nuke (a package for building web communities), but they quickly re-branded themselves:

  • Joomla! – dynamic portal engine and content management system
  • Wordpress – semantic personal publishing platform
  • Drupal – open source content management system

I have used all three to build professional websites. Wordpress is well suited to small websites for small businesses, and I have built websites for huge organisations with Drupal. Indeed, for the past two years Drupal sites have accounted for almost all of the websites I produced, the exceptions being one small Moodle project and a PHPfox website.

Installing Wordpress really takes me no more than 10 minutes or so. (In my experience, a single person business wants to spend at most a few hundred euros on a regular, fairly static website, so if I decide to go with Wordpress I can spend the remaining hours on making the site look good, which in turn helps the customer to establish their brand.)

In light of these developments it may not be too far-fetched to conclude that ease of use, real or perceived, can be very important for the adoption rate of an open source product. Ask the Firefox developers. It wasn’t the plug-ins or the built-in search field or the tabs that made the difference, but the fact that the web browser seemed to work the way laypeople expected a web browser to work.

Indeed, I know plenty of people who never use tabs, who only seem to get confused by them, and plenty of others who search the web by entering a search phrase into the address bar (in the latter case a savvy web dude like myself included). The only Firefox feature I have consistently heard people name as the reason to use that browser is its perceived security. Firefox is, as the head rabbi of the world once put it, the one that “keeps out the schmutz“.