Kristian Lyngstøl's Blog

Varnish Foo - Introduction

Posted on 2015-11-24

This is the only chapter written in first person.

I've worked on Varnish since late 2008, first for Redpill Linpro, then Varnish Software, then, after a brief pause, for Redpill Linpro again. Over the years I've written code, written Varnish modules and blog posts, tried to push the boundaries of what Varnish can do, debugged or analyzed countless Varnish sites, probably held more training courses than anyone else, written training material, and helped shape the Varnish community.

Today I find myself in a position where the training material I once maintained is no longer my responsibility. But I still love writing, and there's an obvious need for documentation for Varnish.

I came up with a simple solution: I will write a book. Because I couldn't imagine that I would ever finish it if I attempted writing a whole book in one go, I decided I would publish one chapter at a time on my blog. This is the first chapter of that book.

You will find the source online. This is something I am doing in my spare time, and I hope to get help from the Varnish community in the form of feedback. While the format will be that of a book, I intend to keep it alive with updates for as long as I can.

I intend to cover as much Varnish-related content as possible, from administration to web development and infrastructure. And my hope is that one day, this will be good enough that it will be worth printing as more than just a leaflet.

I am writing this in my spare time, and I retain full ownership of the material. For now, the material is available under a Creative Commons "CC BY-NC-SA" license. The NC part of that license will be removed when I feel the material has matured enough and the time is right. To clarify: the "non-commercial" clause is aimed at people wanting to sell the book or use it in commercial training (or similar) - it is not intended to prevent you from reading the material at work.

Target audience and format

This book covers a broad spectrum of subjects related to Varnish. It is suitable for system administrators, infrastructure architects and web developers. The first few chapters are general enough to be of interest to all, while later chapters specialize in certain aspects of Varnish usage.

Each chapter is intended to stand well on its own, but there will be some cross-references. The book focuses on best practices and good habits that will help you beyond what just a few examples or explanations will do.

Each chapter provides both theory and practical examples. Each example is tested with a recent Varnish version where relevant, and is based on experience from real-world Varnish installations.

What is Varnish

Varnish is a web server.

Unlike most web servers, Varnish does not read content from a hard drive or run programs that generate content from SQL databases. Varnish acquires its content from other web servers. Usually it will keep a copy of that content in memory for a while to avoid fetching the same content multiple times, but not necessarily.

There are numerous reasons you might want Varnish:

  1. Your web server/application is a beastly nightmare where performance is measured in page views per hour - on a good day.
  2. Your content needs to be available from multiple geographically diverse locations.
  3. Your web site consists of numerous different little parts that you need to glue together in a sensible manner.
  4. Your boss bought a service subscription and now has to justify the budget post.
  5. You like Varnish.
  6. ???

Varnish is designed around two simple concepts: giving you the means to fix or work around technical challenges, and speed. Speed was largely handled very early on, and Varnish is quite simply fast. This is achieved by being, at the core, simple. The less you have to do for each request, the more requests you can handle.

The name suggests what it's all about:

From The Collaborative International Dictionary of English v.0.48 [gcide]:

  Varnish \Var"nish\, v. t. [imp. & p. p. {Varnished}; p. pr. &
     vb. n. {Varnishing}.] [Cf. F. vernir, vernisser. See
     {Varnish}, n.]
     [1913 Webster]
     1. To lay varnish on; to cover with a liquid which produces,
        when dry, a hard, glossy surface; as, to varnish a table;
        to varnish a painting.
        [1913 Webster]

     2. To cover or conceal with something that gives a fair
        appearance; to give a fair coloring to by words; to gloss
        over; to palliate; as, to varnish guilt. "Beauty doth
        varnish age." --Shak.
        [1913 Webster]

Varnish can be used to smooth over rough edges in your stack, to give a fair appearance.


The Varnish project began in 2005. The problem to be solved was that of VG, a large Norwegian news site (or, alternatively, a tiny international site). The first release came in 2006, and worked well for pretty much that one site. In 2008 came Varnish 2.0, which opened Varnish up to more sites, as long as they looked and behaved similarly. As time progressed and more people started using it, Varnish has been adapted to a large and varied set of use cases.

From the beginning, the project was administered through Redpill Linpro, with the majority of development being done by Poul-Henning Kamp through his own company and his Varnish Moral License. In 2010, Varnish Software sprung out from Redpill Linpro. Varnish Cache has always been a free software project, and while Varnish Software has been custodians of the infrastructure and large contributors of code and cash, the project is independent.

Varnish Plus was born some time during 2011, although it didn't go by that name at the time. It was the result of somewhat conflicting interests. Varnish Software had customer obligations that required features, and the development power to implement them, but these did not necessarily align with the goals and time frames of Varnish Cache. Varnish Plus became a commercial test-bed for features that were not /yet/ in Varnish Cache for various reasons. Many of the features have since trickled into Varnish Cache proper in one way or another (streaming, surrogate keys, and more), and some have yet to make it. Some may never make it. This book will focus on Varnish Cache proper, but will reference Varnish Plus where it makes sense.

With Varnish 3.0, released in 2011, Varnish modules (vmods) started becoming a big thing. These are modules that are not part of the Varnish Cache code base, but are loaded at run-time to add features such as cryptographic hash functions (vmod-digest) and memcached support. The number of vmods available grew quickly, but even with Varnish 4.1, the biggest issue with them was that they required compilation from source. That, however, is being fixed almost as I am writing this sentence.

Varnish would not be where it is today without a large number of people and businesses. Varnish Software has contributed, and continues to contribute, numerous tools, vmods, and core features. Poul-Henning Kamp is still the gatekeeper of the Varnish Cache code, for better or worse, and does the majority of the architectural work. Over the years, there have been too many companies and individuals involved to list them all in a book, so I will leave that to the official Varnish Cache project.

Today, Varnish is used by CDNs and newspapers, APIs and blogs.

More than just cache

Varnish caches content, but can do much more. In 2008, it was used to rewrite URLs, normalize HTTP headers and similar things. Today, it is used to implement paywalls (whether you like them or not), API metering, load balancing, CDNs, and more.

Varnish has a powerful configuration language, the Varnish Configuration Language (VCL). VCL isn't parsed the way a traditional configuration file is, but is translated to C code, compiled and linked into the running Varnish. From the beginning, it was possible to bypass the entire translation process and provide C code directly, though this was never recommended. With Varnish modules, it's possible to write proper modules to replace the in-line C code that was used in the past.
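To give a feel for the language, here is a minimal, hypothetical VCL sketch. The backend address and the cookie-stripping policy are illustrative assumptions, not recommendations from this book:

```vcl
vcl 4.0;

# Hypothetical backend; Varnish fetches content from here.
backend default {
    .host = "127.0.0.1";
    .port = "8080";
}

sub vcl_recv {
    # Strip cookies from static assets so they become cacheable.
    if (req.url ~ "\.(png|jpg|css|js)$") {
        unset req.http.Cookie;
    }
}
```

When a file like this is loaded, varnishd runs it through the VCL compiler, producing C code that is compiled to a shared object and linked into the running process.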

There is also an often overlooked Varnish agent that provides an HTTP REST interface for managing Varnish. This can be used to extract metrics, review or optionally change configuration, stop and start Varnish, and more. The agent lives on, and is packaged for most distributions today. There's also a commercial administration console that builds further on the agent.

Using Varnish to gracefully handle operational issues is also common. Serving cached content past its expiry time while a web server is down, or switching to a different server, will give your users a better browsing experience. And in a worst case scenario, at least the user can be presented with a real error message instead of a refused or timed out connection.

An often overlooked feature of Varnish is Edge Side Includes (ESI). This is a means to build a single HTTP object (like an HTML page) from multiple smaller objects with different caching properties. This lets content writers provide more fine-grained caching strategies without having to be too smart about it.
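As a sketch of the idea: a page shell can be cached for a long time, while a user-specific fragment is fetched and cached independently with its own (shorter or zero) TTL. The URLs here are hypothetical:

```html
<!-- Cacheable page shell; the fragment below is fetched and cached
     independently, with its own TTL. -->
<html>
  <body>
    <h1>Front page</h1>
    <esi:include src="/fragments/user-greeting"/>
  </body>
</html>
```

For Varnish to actually process the include, ESI handling has to be enabled for the object, typically with `set beresp.do_esi = true;` in VCL.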

Where to get help

The official Varnish documentation is available both as manual pages (run man -k varnish on a machine with a properly installed Varnish package) and as Sphinx documentation on the project web site.

Varnish Software has also published their official training material, which is called "The Varnish Book" (not to be confused with THIS book about Varnish). It is freely available through their site, after registration.

An often overlooked source of information for Varnish is the flow charts/dot-graphs used to document the VCL state engine. These are only found in the source code of Varnish, under doc/graphviz/. They can be generated simply, assuming you have graphviz installed:

# git clone
Cloning into 'Varnish-Cache'...
# cd Varnish-Cache/
# cd doc/graphviz/
# for a in *.dot; do dot -Tpng "$a" > "${a%.dot}.png"; done
# ls *.png

Alternatively, replace -Tpng and .png with -Tsvg and .svg respectively to get vector graphics, or with -Tpdf and .pdf for PDFs.

You've now made three graphs that you might as well print right now and glue to your desk if you will be working with Varnish a lot.

For convenience, the graphs from Varnish 4.1 are included. If you don't quite grasp what these tell you yet, don't be too alarmed. These graphs are provided early as they are useful to have around as reference material. A brief explanation for each is included, mostly to help you in later chapters.



This can be used when writing VCL. Look for the blocks whose names start with vcl_ to identify VCL functions. The lines tell you how a return statement in VCL will affect the VCL state engine at large, and which return statements are available where. You can also see which objects are available where.

This particular graph details the client-specific part of the VCL state engine.



This graph has the same format as cache_req_fsm.png, but shows the perspective of a backend request.



Of the three, this is the least practical flow chart, mainly included for completeness. It does not document much related to VCL or practical Varnish usage, but rather the internal state engine of an HTTP request in Varnish. It can sometimes be helpful for debugging internal Varnish issues.


Visualizing VCL

Posted on 2015-11-16

I was preparing to upgrade a customer, and ran across a semi-extensive VCL setup. It quickly became a bit hard to get a decent overview of what was going on.

The actual VCL is fairly simple.

To deal with this, I ended up hacking together a tiny awk/shell script to generate a dot graph of how things were glued together. The script is available online.
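The actual script isn't reproduced here, but the idea can be sketched roughly like this. This is a hypothetical minimal version, assuming the VCL files live in the current directory and includes are written one per line as `include "file.vcl";`:

```shell
#!/bin/sh
# Minimal sketch: emit a dot graph of VCL include relationships,
# one edge per 'include "file.vcl";' statement found.
echo 'digraph vcl {'
for f in *.vcl; do
    # Extract the quoted filename from each include line.
    sed -n 's/^[[:space:]]*include[[:space:]]*"\([^"]*\)".*/\1/p' "$f" |
    while read -r inc; do
        printf '  "%s" -> "%s";\n' "$f" "$inc"
    done
done
echo '}'
```

Pipe the output through `dot -Tpng` to get the picture.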

The output is somewhat ugly, but useful.


(Click for full version)

Of note:

No idea if it's of interest to anyone but me, but I found it useful.


Magic Grace

Posted on 2015-09-25

I was hacking together a JavaScript varnishstat implementation for a customer a few days ago when I noticed something strange. I had put Varnish in front of the agent delivering stats, but I was only caching the statistics for 1 second.

But the cache hit rate was 100%.

And the stats were updating?

Logically speaking, how can you hit cache 100% of the time and still get fresh content all the time?

Enter Grace

Grace mode is a feature Varnish has had since version 2.0, back in 2008. It is a fairly simple mechanic: add a little bit of extra cache duration to an object. This is the grace period. If a request is made for the object during that grace period, the cached copy is delivered while the object is updated.

This reduces the thundering horde problem when a large amount of users request recently expired content, and it can drastically improve user experience when updating content is expensive.

The big change that happened in Varnish 4 was background fetches.

Varnish uses a very simple thread model (so to speak). Essentially, each session is handled by one thread. In prior versions of Varnish, requests to the backend were always tied to a client request.

  • Thread 1: Accept request from client 1
  • Thread 1: Look up content in cache
  • Thread 1: Cache miss
  • Thread 1: Request content from web server
  • Thread 1: Block
  • Thread 1: Get content from web server
  • Thread 1: Respond

If the cache is empty, there isn't much of a reason NOT to do this. Grace mode, however, complicated this. What PHK did to solve it was, in my opinion, quite brilliant in its simplicity, even if it was a trade-off.

With grace mode, you HAVE the content, you just need to make sure it's updated. It looked something like this:

  • Thread 1: Accept request from client 1
  • Thread 1: Look up content in cache
  • Thread 1: Cache miss
  • Thread 1: Request content from web server
  • Thread 1: Block
  • Thread 1: Get content from web server
  • Thread 1: Respond

So ... NO CHANGE. For a single client, you don't have grace mode in earlier Varnish versions.

But enter client number 2 (or 3, 4, 5...):

  • Thread 1: Accept request from client 1
  • Thread 1: Look up content in cache
  • Thread 1: Cache miss
  • Thread 1: Request content from web server
  • Thread 1: Block
  • Thread 2: Accept request from client 2
  • Thread 2: Look up content in cache
  • Thread 2: Cache hit - grace copy is now eligible - Respond
  • Thread 1: Get content from web server
  • Thread 1: Respond

So with Varnish 2 and 3, only the first client will block waiting for new content. This is still an issue, but it does the trick for the majority of use cases.

Background fetches!

Background fetches changed all this. It's more complicated in many ways, but from a grace perspective, it massively simplifies everything.

With Varnish 4 you get:

  • Thread 1: Accept request from client 1
  • Thread 1: Look up content in cache
  • Thread 1: Cache hit - grace copy is now eligible - Respond
  • Thread 2: Request content from web server
  • Thread 2: Block
  • Thread 3: Accept request from client 2
  • Thread 3: Look up content in cache
  • Thread 3: Cache hit - grace copy is now eligible - Respond
  • Thread 2: Get content from web server

And so forth. Strictly speaking, I suppose this makes grace /less/ magical...

In other words: The first client will also get a cache hit, but Varnish will update the content in the background for you.

It just works.


What is a cache hit?

If I tell you that I have 100% cache hit rate, how much backend traffic would you expect?

We want to keep track of two ratios:

  • Cache hit rate - how much content is delivered directly from cache (same as today). Target value: 100%.
  • Fetch/request ratio: How many backend fetches do you initiate per client request. Target value: 0%.

For my application, a single user will result in a 100% cache hit rate, but also a fetch/request ratio of 100%. The cache isn't really offloading the backend load significantly until I have multiple users of the app. Mind you, if the application was slow, this would still benefit that one user.

The latter is also interesting from a security point of view. If you find the right type of request, you could end up with more backend fetches than client requests (e.g. due to restarts/retries).
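The distinction between the two ratios can be made concrete with a little arithmetic. The counter names below are made up for illustration; a real deployment would read something equivalent out of varnishstat:

```python
def cache_ratios(client_requests, cache_hits, backend_fetches):
    """Return (cache hit rate, fetch/request ratio).

    Counter names are illustrative, not actual varnishstat fields.
    """
    hit_rate = cache_hits / client_requests
    fetch_ratio = backend_fetches / client_requests
    return hit_rate, fetch_ratio

# One user polling a 1-second cache: every request is a (grace) hit,
# but every request also triggers a background fetch.
hit, fetch = cache_ratios(client_requests=100, cache_hits=100,
                          backend_fetches=100)
print(f"hit rate {hit:.0%}, fetch/request ratio {fetch:.0%}")
# -> hit rate 100%, fetch/request ratio 100%
```

With many users sharing the cache, the hit rate stays at 100% while the fetch/request ratio drops toward zero, which is where the backend offloading actually happens.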

How to use grace

You already have it, most likely. Grace is turned on by default, with a 10-second grace period. For frequently updated content, this is enough.

Varnish 4 changed some of the VCL and parameters related to grace. The important bits are:

  • Use beresp.grace in VCL to adjust grace for an individual object.
  • Use the default_grace parameter to adjust the ... default grace for objects.
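A sketch of the first bullet, in Varnish 4 VCL; the one-hour value is an arbitrary example:

```vcl
sub vcl_backend_response {
    # Keep serving this object for up to an hour after its TTL
    # expires, while a background fetch refreshes it.
    set beresp.grace = 1h;
}
```

The default can similarly be changed at runtime with `varnishadm param.set default_grace <seconds>`.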

If you want to override grace mechanics, you can do so in vcl_recv by setting req.ttl, which defines a max TTL to be used for an object, regardless of the object's actual TTL. That bit is a bit mysterious.

Or you can look at vcl_hit. Here you'll be able to do:

if (obj.ttl + obj.grace > 0s && obj.ttl <= 0s) {
        // The TTL has expired, but we still have a graced object
        if (req.http.x-magic-skip-grace-header ~ "yes") {
                return (miss);
        } else {
                return (deliver);
        }
}
The above example snippet evaluates whether the object has an expired TTL but is still within the grace period. If so, it looks for a client header called "X-Magic-Skip-Grace-Header" and checks if it contains the string "yes". If it does, the request is treated as a cache miss; otherwise, the cached object is delivered.


Varnish Wishlist

Posted on 2015-09-19

I recently went back to working for Redpill Linpro, and thus started working with Varnish again, after being on the side lines for a few years.

I've been using Varnish since 2008. And a bit more than just using it, too. There's been a lot of great change over time, but there are still things missing. I recently read Kacper's wishlist, and while I largely agree with him, I think some of the bigger issues are missing from the list.

So here's my attempt to add to the debate.


Varnish needs TLS/SSL.

It's the elephant in the room that nobody wants to talk about.

The world is not the same as it was in 2006. Varnish is used for more and more sensitive sites. A larger percentage of Varnish installations now have some sort of TLS/SSL termination attached to it.

TLS/SSL has been a controversial issue in the history of Varnish Cache, with PHK (the principal architect of Varnish Cache) being an outspoken opponent of adding TLS in Varnish. There are valid reasons, and Heartbleed has most certainly proven many of PHK's grievances right. But what does that matter when we use TLS/SSL anyway? It's already in the stack; we're just closing our eyes to it.

Setting up nginx in front of Varnish to get TLS/SSL, then nginx behind Varnish to get TLS/SSL... That's just silly. Why not just use nginx to cache then? The lack of TLS/SSL in Varnish is a great advertisement for nginx.

There are a lot of things I dislike about TLS/SSL, but we need it anyway. There's the hitch project, but it's not really enough. We also need TLS/SSL to the backends, and a tunnel-based solution isn't enough. How would you do smart load balancing through that? If we don't add TLS/SSL, we might as well just forget about backend directors altogether. And it has to be an integral part of all backends.

We can't have a situation where some backend directors support TLS/SSL and some don't.

Varnish Software is already selling this through Varnish Cache Plus, their proprietary version of Varnish. That is obviously because it's a deal breaker in a lot of situations. The same goes for basically any serious commercial actor out there.

So we need TLS/SSL. And we need it ASAP.


After speaking to PHK, let me clarify: he's not against adding support for TLS, but against adding TLS itself. Varnish now supports the PROXY protocol, which was added explicitly to improve support for TLS termination. Further such additions would likely be acceptable, always doing the TLS outside of Varnish.

Better procedures for VCL changes

With every Varnish version, VCL (the configuration language for Varnish) changes either a little bit or a lot. Some of these changes are unavoidable due to internal Varnish changes. Some changes tweak the language to be more accurate (e.g. changing req.request to req.method, to reflect that it's the request method).
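For example, the req.request/req.method rename means the same check is spelled differently depending on the version:

```vcl
# Varnish 3.x
if (req.request == "POST") {
    return (pass);
}

# Varnish 4.x: same logic, renamed variable
if (req.method == "POST") {
    return (pass);
}
```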

If Varnish is part of your day-to-day work, then this might not be a huge deal. You probably keep up-to-date on what's going on with Varnish anyway. But most users aren't there. We want Varnish to be a natural part of your stack, not a special thing that requires a "varnish-admin".

This isn't necessarily an easy problem to solve. We want to be able to improve VCL and get rid of old mistakes (e.g., changing req.request to req.method is a good thing for VCL). We've also changed the way to do error messages (or custom varnish-generated messages) numerous times. And how to create hitpass objects (a complicated aspect of any cache).

A few simple suggestions:

  • All VCL changes reviewed in public as a whole before the release process even starts. To avoid having to change it again two versions down the line.
  • Backward compatibility when possible, with warnings, or even requiring an extra option to allow it. E.g.: req.request could easily still work; there's no conflict there. Not forever, but perhaps until the end of a major version. Not everything can be backwards compatible, but some things can.

I've had numerous complaints from highly skilled sysadmins who are frustrated by this aspect of Varnish. They just don't want to upgrade because they have to do what feels like arbitrary VCL changes every single time. Let's see if we can at least REDUCE that.


Documentation

There's a lot of documentation for Varnish, but there's also a lot of bad documentation. Some issues:

  • People Google and end up on random versions of the online documentation. No, telling people "but there's a version selector right there, so it's your own fault!" is not an acceptable solution. Varnish Software themselves recently had a link in their Varnish Book that pointed to "trunk" instead of "4.0", whereupon the "here is a complete list of changes between Varnish 3 and Varnish 4" link was actually a link to the changes between Varnish 4.0 and the next version of Varnish.

  • "user guide" and "tutorial" and "installation"? Kill at least two and leave the others for blog posts or whatever. It's hard enough to maintain one with decent quality.

  • Generated documentation needs to be improved. Example:

            STRING fileread(PRIV_CALL, STRING)
            Reads a file and returns a string with the content. Please
            note that it is not recommended to send variables to this
            function the caching in the function doesn't take
            this into account. Also, files are not re-read.
            set beresp.http.served-by = std.fileread("/etc/hostname");

    PRIV_CALL should clearly not be exposed! Other examples are easy enough to find.

    In addition, the Description is a mixture of reference documentation style and elaboration. Reference documentation should be clearly separated from analysis of consequences so technical users don't have to reverse-engineer a sentence of "don't do this because X" to figure out what the code actually does.

    And where are the details? What happens if the file can't be opened? What are the memory constraints? It says it returns the content of the file as a string, but what happens with binary content? There's clearly some caching of the file, but how does that work? Per session? Per VCL? Does that cache persist when you do varnishadm stop; varnishadm start? That's completely left out.

  • Rants mixed in with documentation? Get rid of "doc/sphinx/phk" and instead reference it somewhere else. The official documentation should not be a weird blog-space; it clutters the documentation space. Varnish is not a small little project any more; it's grown past this.

VMOD packages

Varnish vmods are awesome. You can design some truly neat solutions using Open Source vmods, or proprietary ones.

But there are not even semi-official package repositories for the open source vmods. Varnish Software offers this to customers, but I really want it for the public too. Both for my own needs, and because it's important for improving Varnish and VMOD adoption.

Until you can do "apt-get install varnish-vmod-foo" or something like that, VMODS will not get the attention they deserve.

There are some projects in the works here, though, so stay tuned.


In case you missed it, I want TLS/SSL.

I want to be able to type https://<varnish host>

BTW: Regarding terminology, I decided to go with "TLS/SSL" instead of either "SSL" or "TLS" after some feedback. I suppose "TLS" is correct, but "SSL" is more recognized, whether we like it or not.


Cycling Norway

Posted on 2014-08-14

About 4 years ago I decided to get in better shape. I took my bike for the longest ride I had ridden at the time, which was about 7km I believe. This summer, I took the train to Stavanger and cycled back to Oslo along the coast. All in all, it was a little under 500km and took me a little over a week (8 days of cycling), with the longest single stretch being 131km (final stretch home).


I had done the same trip, more or less, the year before and had learned a lot. But frankly, preparations were pretty simple:

  1. Alert friends and family
  2. Make sure I have a functional bike.
  3. Make sure I've got luggage sorted out.
  4. Pack light.
  5. Book train ticket (Probably the hardest part)
  6. Ride the bike a lot.

I ride a relatively cheap Merida Cyclocross 3 for this sort of trip, which gives a nifty mix of speed and durability, and lets me attach a luggage rack. This year, I was trying out slick tyres, as I knew most of the route was going to be asphalt.

Stage 0: Getting to Stavanger

Strava log:

As it turns out, this was harder than it should have been. NSB (the Norwegian rail company; what its proper English name is is anybody's guess, since the "Company Information" page they provide in English is blank) provides separate tickets for bikes, and you can book them ahead of time. There's a max price, which means that for long-ish distances, bringing your bike is pretty cheap (175kr or something like that is the max price). In theory, this is all pretty awesome. However...

  • Problem 1: Can't book bike tickets on-line.
  • Problem 2: Construction work on parts of the final stretch to Stavanger. Buses instead of trains.
  • Problem 3: NSB didn't seem able to book my bike past the point where the buses start replacing the trains. E.g.: Bike on train == fine, bike on bus == ????.
  • Problem 4: Guy at the station doesn't actually know how far the train will go. Sells me tickets for Egersund, while the train now goes to Bryne as I later discover. Tells me to speak to the bus drivers regarding the bike and assumes it'll be no problem to take the bus with my bike, despite no official ticket/booking.
  • Problem 5: Wait, the trains goes to Bryne? That means no buses in Egersund, where I have tickets for.
  • Problem 6: NEXT guy, day later, doesn't realize this is a problem created by NSB, so I have to buy a NEW set of tickets for Bryne...
  • Problem 7: Wait, what, my bike has paid 90,- NOK extra for free coffee, newspapers and wifi? And has a seat reserved for it, not a place in the goods carriage?

In short: One big mess.

Thankfully, NSB customer services sorted it all out once I wrote them an e-mail, and I got my extra money back, got a ride to Bryne (where my brother picked me up), and had two seats to myself since my bike still had one reserved...

By the way: I lied, I rode from Forus, not Stavanger. Don't tell anyone.

Stage 1: "Stavanger" - Egersund

Strava log:

In short: Wicked rain followed by great sun, two flat tires on the back wheel, lots of nice, flat terrain, swearing about stupid detours triggered by bicycle routes being made for sightseeing, not transport, then this:


Some times, detours are worth it.

Also ran into parts of what's called "Den Vestlandske Hovedvei" (the western main road). For anyone cycling, please be aware: that's signage-code for "ridiculously steep hills, bad, unpaved roads, roads that stop at the bottom of hills, completely suboptimal cycling terrain, and also very scenic".



(Photos: horses on the road)

They never moved. I'm not particularly afraid of horses, but I'm reluctant to pass behind horses I'm not familiar with and who are not accompanied by a human, so I ended up going around them in the ditch.


Stage 2: Egersund - Flekkefjord

Strava log:

In short: no rain until the last 30 minutes, lots of steep hills both up and down, switched the actual tyre this time after the third tube change, and staying with Bendik in Flekkefjord was great (heck, I had a whole floor to myself).


The Jøssingfjord is famous due to the Altmark Incident, a skirmish during World War 2 in which Norwegian neutrality was breached.

(Photos: the Jøssingfjord memorial)

And finally, Flekkefjord:


Not exactly the best weather, but my host was great and good company easily makes up for bad weather!

Stage 3: Flekkefjord - Mandal

Strava log:

In short: Forecast: 30°C and sun. Reality: CONSTANT RAIN.

Started the day by getting a lift from my generous host. Cycling Flekkefjord - Lyngdal using the official cycling route is a gigantic detour, since they recently built a large number of tunnels and a GREAT new bridge across Fedafjorden, none of which you are allowed to cycle on. The official cycle route then takes you along the coast on mostly gravel roads. I wasn't doing that again.

Since the forecast was so good, I figured the little rain I had during the start of the ride was just left-overs from the thunderstorm we had during the night. So I only used a rain jacket. No rain pants. No rain covers for my shoes. BIG BIG mistake, as it was raining through the entire ride. Just as I was entering Mandal the weather cleared up.

As for the route: good pick. All downhill on GOOD unpaved roads to Lyngdal. One big hill up from Lyngdal, then mixed. The last stretch is steep climbing on forest roads. If allowed: cycle on the E39! It'll save you a LOT of energy.

Stage 4: Mandal - Kristiansand

Strava log:

In short: GREAT ride. Great scenery, good roads, good weather, little or no traffic. Almost lost the Strava log, but managed to resurrect it when I got home.


Also got to spend the evening with Vegard and his family, who were nice enough to take me on a boat trip in the Kristiansand area. I even got one of those so-called selfies:


Stage 5: Kristiansand - Grimstad

Strava log:

In short: great weather, great roads, then "Vestlandske Hovedvei" again. Yeah, it turned out to be another stretch of steep climbs on bad gravel roads. Oh well.


Stage 6: Grimstad - Risør

Strava log:

In short: Nice ride (I finally figured out Tvedestrand), heavy rain during the last 20 minutes left me completely soaking wet, but otherwise happy.


This probably takes care of tailgaters.

Stage 7: Risør - Kragerø (sort of)

Strava log:

In short: nice ferry ride, by far the shortest ride of the trip (it hardly even counts); met another cyclist on the ferry from Risør and we cycled together to Stabbestad. Good roads, but somewhat boring scenery.


Stage 8: Kragerø - Oslo (sort of)

Strava log: (slightly broken).

In short: "Let's just do this and get home." Ridiculously warm. Dad gave me a lift from Tåtøy to Helgeroa, then Garmin lost the data between Helgeroa and my first pit-stop in Larvik.

Tip: Do NOT follow the official cycle route here if you want to make good progress. It's very scenic and nice, but also very slow as it takes you through forest paths and whatnot. Following my route from Helgeroa to Horten was very fast and easy (flat).

The only picture I really took was from the ferry crossing from Horten to Moss:


(That's Horten in the background).


  • Bike: Merida Cyclocross 3 (2013 model)
  • Tyres: Continental slicks, then some Maxxis rear tyre.
  • Cheap bags from G-Sport/GMAX. They worked OK, but the lack of a proper stiff plate between the bag and the wheel meant that the bags gradually got closer to the wheels as the trip progressed. Not a problem this time, but I probably won't take them on another long trip.
  • Shoes: Bright orange Giro shoes with MTB cleats/pedals. Bought because they have a Vibram sole, which means two things: Good grip when you're OFF the bike, and you can walk around without sounding like you're wearing slalom boots or something like that.
  • Garmin Edge 810, with maps from OpenStreetMap (OSM is THE best source for cycling maps). The Edge 810 + OSM combo works great for long travel, but I did have to resort to using my phone + Google Maps once or twice. All in all, not bad. It's really nice to have a map in front of you when riding on unknown paths. As for the Edge itself... it shut down spontaneously 4-5 times (with good battery), lost the Helgeroa-Larvik stretch, and ALMOST lost me the Mandal-Kristiansand stretch (non-technical users would probably not have realized you could recover it). But I'd still use it again.
  • Lights: Knog lights. This is not for night cycling (we have something like 4 hours of night-time at this time of year), but for tunnels and really bad weather. The type of lights don't matter that much, but you'll feel a lot better if you bring them and then need them.
  • Clothes: I typically ride with a regular cycling bib (Assos) under some sort of loose outdoor clothes. I feel better off the bike in "normal" clothes, and it feels fine on the bike anyway.
  • Kindle!!!! I read a lot on my rides (well, in the evenings and when I stop for a break, anyway).
  • Camera (Brought a huge SLR this time. Total overkill, but meh)
  • "Civilian clothes": These make the end points much nicer. The trick is to have overlapping clothes. A fleece sweater works fine when playing cards on a late night, and is a good backup for really cold weather, for example. Proper civilian clothes make the trip significantly more leisurely (and you're less of a burden to your hosts).

Next time: Smaller camera. Less "Stuff". Stronger rear tyre from the start.


The trip was a really great success. What really made it, though, was all the stops along the way.

I want to thank everyone who let me stay with them, often on short notice.

My next multi-day trip is probably going to be mountain biking, but one thing is certain: I've come a long way since I decided to get in shape 4 years ago.