Kristian Lyngstøl's Blog

Cycling Norway

Posted on 2014-08-14

About 4 years ago I decided to get in better shape. I took my bike for the longest ride I had ridden at the time, which was about 7km I believe. This summer, I took the train to Stavanger and cycled back to Oslo along the coast. All in all, it was a little under 500km and took me a little over a week (8 days of cycling), with the longest single stretch being 131km (final stretch home).


I had done the same trip, more or less, the year before and had learned a lot. But frankly, preparations were pretty simple:

  1. Alert friends and family
  2. Make sure I have a functional bike.
  3. Make sure I've got luggage sorted out.
  4. Pack light.
  5. Book train ticket (Probably the hardest part)
  6. Ride the bike a lot.

I ride a relatively cheap Merida Cyclocross 3 for these sorts of trips, which gives a nifty mix of speed and durability, and lets me attach a luggage rack. This year I was trying out slick tyres, as I knew most of the route was going to be asphalt.

Stage 0: Getting to Stavanger

Strava log:

As it turns out, this was harder than it should have been. NSB (the Norwegian rail company; what the proper English name is is anybody's guess, since the "Company Information" page they provide in English is blank) provides separate tickets for bikes, and you can book them ahead of time. There's a max price, which means that for long-ish distances, bringing your bike is pretty cheap (175kr or something like that is the max price). In theory, this is all pretty awesome. However...

  • Problem 1: Can't book bike tickets on-line.
  • Problem 2: Construction work on parts of the final stretch to Stavanger. Buses instead of trains.
  • Problem 3: NSB didn't seem able to book my bike past the point where the buses start replacing the trains. E.g.: Bike on train == fine, bike on bus == ????.
  • Problem 4: Guy at the station doesn't actually know how far the train will go. Sells me tickets for Egersund, while the train now goes to Bryne as I later discover. Tells me to speak to the bus drivers regarding the bike and assumes it'll be no problem to take the bus with my bike, despite no official ticket/booking.
  • Problem 5: Wait, the trains goes to Bryne? That means no buses in Egersund, where I have tickets for.
  • Problem 6: NEXT guy, day later, doesn't realize this is a problem created by NSB, so I have to buy a NEW set of tickets for Bryne...
  • Problem 7: Wait, what, my bike has paid 90,- NOK extra for free coffee, newspapers and wifi? And has a seat reserved for it, not a place in the goods carriage?

In short: One big mess.

Thankfully, NSB customer services sorted it all out once I wrote them an e-mail, and I got my extra money back, got a ride to Bryne (where my brother picked me up), and had two seats to myself since my bike still had one reserved...

By the way: I lied, I rode from Forus, not Stavanger. Don't tell anyone.

Stage 1: "Stavanger" - Egersund

Strava log:

In short: Wicked rain followed by great sun, two flat tires on the back wheel, lots of nice, flat terrain, and swearing about stupid detours triggered by bicycle routes being made for sightseeing, not transport. Then this:


Sometimes, detours are worth it.

Also ran into parts of what's called "Den Vestlandske Hovedvei" (The western main road). For anyone cycling, please be aware: That's signage-code for "ridiculously steep hills, bad unpaved roads, roads that stop at the bottom of hills, completely suboptimal cycling terrain, and also very scenic".



(Photos: horsy.jpeg, horsy2.jpeg)

They never moved. I'm not particularly afraid of horses, but I'm reluctant to pass behind horses I'm not familiar with and who are not accompanied by a human, so I ended up going around them in the ditch.


Stage 2: Egersund - Flekkefjord

Strava log:

In short: No rain until the last 30 minutes, lots of steep hills both up and down, and I switched the actual tyre this time after the third tube change. Staying with Bendik in Flekkefjord was great (heck, I had a whole floor to myself).


The Jøssingfjord is famous due to the Altmark Incident, a skirmish during World War 2 in which Norwegian neutrality was breached.

(Photos: jossingfjord-memorial.jpeg, double_roof_small.jpeg)

And finally, Flekkefjord:


Not exactly the best weather, but my host was great and good company easily makes up for bad weather!

Stage 3: Flekkefjord - Mandal

Strava log:

In short: Forecast: 30°C and sun. Reality: CONSTANT RAIN.

Started the day by getting a lift from my generous host. Cycling Flekkefjord - Lyngdal using the official cycling route is a gigantic detour, since they recently built a number of tunnels and a GREAT new bridge across Fedafjorden, none of which you are allowed to cycle on. The official cycle route instead takes you along the coast on mostly gravel roads. I wasn't doing that again.

Since the forecast was so good, I figured the little rain I had at the start of the ride was just left-overs from the thunderstorm we had during the night. So I only used a rain jacket. No rain pants. No rain covers for my shoes. BIG BIG mistake, as it rained through the entire ride. Just as I was entering Mandal, the weather cleared up.

As for the route: Good pick. All downhill on GOOD unpaved roads to Lyngdal, one big hill up from Lyngdal, then mixed. The last stretch is steep climbing on forest roads. If allowed: cycle on the E39! It'll save you a LOT of energy.

Stage 4: Mandal - Kristiansand

Strava log:

In short: GREAT ride. Great scenery, good roads, good weather, little or no traffic. Almost lost the Strava log, but managed to resurrect it when I got home.


Also got to spend the evening with Vegard and his family, who were nice enough to take me on a boat trip in the Kristiansand area. I even got one of those so-called selfies:


Stage 5: Kristiansand - Grimstad

Strava log:

In short: Great weather, great roads, then "Vestlandske Hovedvei" again. Yeah, it turned out to be another stretch of steep climbs on bad gravel road. Oh well.


Stage 6: Grimstad - Risør

Strava log:

In short: Nice ride (I finally figured out Tvedestrand), heavy rain during the last 20 minutes left me completely soaking wet, but otherwise happy.


This probably takes care of tailgaters.

Stage 7: Risør - Kragerø (sort of)

Strava log:

In short: Nice ferry ride, by far the shortest ride of the trip (hardly even counts), met another cyclist on the ferry from Risør and we cycled together to Stabbestad. Good roads, but somewhat boring scenery.


Stage 8: Kragerø - Oslo (sort of)

Strava log: (slightly broken).

In short: "Let's just do this and get home." Ridiculously warm. Dad gave me a lift from Tåtøy to Helgeroa, then Garmin lost the data between Helgeroa and my first pit-stop in Larvik.

Tip: Do NOT follow the official cycle route here if you want to make good progress. It's very scenic and nice, but also very slow as it takes you through forest paths and whatnot. Following my route from Helgeroa to Horten was very fast and easy (flat).

The only picture I really took was from the ferry crossing from Horten to Moss:


(That's Horten in the background).


  • Bike: Merida Cyclocross 3 (2013 model)
  • Tyres: Continental slicks, then some Maxxis rear tyre.
  • Cheap bags from G-Sport/GMAX. Worked OK, but the lack of a proper stiff plate between the bag and the wheel meant that the bags gradually got closer to the wheels as the trip progressed. Not a problem now, but I probably won't take them on another long trip.
  • Shoes: Bright orange Giro shoes with MTB cleats/pedals. Bought because they have a Vibram sole, which means two things: Good grip when you're OFF the bike, and you can walk around without sounding like you're wearing slalom boots or something like that.
  • Garmin Edge 810, with maps from OpenStreetMap (OSM is THE best source for cycling maps). The Edge 810 + OSM combo works great for long travel, but I did have to resort to using my phone + Google Maps once or twice. All in all, not bad. It's really nice to have a map in front of you when riding on unknown paths. As for the Edge itself... It shut down spontaneously 4-5 times (good battery), lost the Helgeroa-Larvik stretch and ALMOST lost me the Mandal-Kristiansand stretch (non-technical users would probably not have realized it could be recovered). But I'd still use it again.
  • Lights: Knog lights. These are not for night cycling (we have something like 4 hours of night-time at this time of year), but for tunnels and really bad weather. The type of light doesn't matter that much, but you'll feel a lot better if you bring them and then need them.
  • Clothes: I typically ride with a regular cycling bib (Assos) under some sort of loose terrain clothes. I feel better off the bike in "normal" clothes, and it feels fine on the bike anyway.
  • Kindle!!!! I read a lot on my rides (well, in the evenings and when I stop for a break, anyway).
  • Camera (Brought a huge SLR this time. Total overkill, but meh)
  • "Civilian clothes": This makes the end points much nicer. The trick is to have overlapping clothes. A fleece sweater works fine when playing cards late at night, and is a good backup for really cold weather, for example. Proper civilian clothes make the trip significantly more leisurely (and you're less of a burden to your hosts).

Next time: Smaller camera. Less "Stuff". Stronger rear tyre from the start.


The trip was a really great success. What really made it, though, was all the stops along the way.

I want to thank everyone who let me stay with them, often on short notice.

My next multi-day trip is probably going to be mountain biking, but one thing is certain: I've come a long way since I decided to get in shape 4 years ago.


The Architecture of the Varnish Agent

Posted on 2013-02-15

Designing software architecture is fun.

The Varnish Agent 2 was written as a replacement for the original Varnish Agent. They both share the same purpose: Expose node-specific Varnish features to a management system. They are designed very differently, though.

In this post I'd like to explain some choices that were made, and show you how to write your own code for the Varnish Agent 2. It's really not that hard.

The code can be found at:

Why C?

The choice of C as a language was made fairly early. One of the main reasons is that Varnish itself is written in C, as are all the tools for Varnish. This means that by far the best-supported APIs for talking to Varnish are written in C.

But another reason is that C is a very good language. It has become a false truth that you "never write web apps in C", more or less. There are good reasons for this: it takes time to set things up in C, C isn't very forgiving, and perhaps most importantly: people generally suck at C.

In the end, we chose C because it was the right tool for the job.


When designing a new system, it's important to know what you're trying to achieve, and perhaps just as important to know what you're /not/ trying to achieve.

The Varnish Agent is designed to:

  • Manage a single Varnish server.
  • Remove the need for management frontends to know the Varnish CLI language.
  • Expose log data
  • Persist configuration changes
  • Require "0" configuration of the agent itself
  • Ensure that Varnish works on boot, even if there is no management front-end present.
  • Be expandable without major re-factoring.
  • Be easy to expand

What we did NOT want was:

  • Support for running the agent on a different machine than the Varnish server.
  • Elaborate self-management of the agent (e.g: support for users, and management of them).
  • Mechanisms that are opaque to a system administrator
  • Front-end code mixed with back-end code
  • "Sessions"

We've achieved pretty much all of these goals.

The heart of the agent: The module

At the heart of the agent, there is the module. As of this writing, there are 14 modules written. The average module is 211 lines of C code (including copyright and license). The smallest module, the echo module, is 92 lines of code (the echo plugin is an example plugin with extensive self documentation). The largest modules, the vlog and vcl modules, are both 387 lines of code.

To make modules useful, I spent most of the initial work on carving out how modules should work. This is currently how it works:

  • You define a module, say, src/modules/foobar.c
  2. You write foobar_init(). This function is the only absolutely required part of the module. It will be run in the single-threaded stage of the agent.
  • You either hook into other modules (like the httpd-module), or define a start function.
  • After all plugins are initialized, the start function of each plugin is executed, if present.

That's it.

Since a common task is inter-operation between plugins, an IPC mechanism was needed. I threw together a simple message passing mechanism, inspired by Varnish. This lives in src/ipc.c and include/ipc.h. The only other way to currently talk to other modules is through httpd_register (and logger(), but that's just a macro for ipc_run()).

If you want your foobar.c-plugin to talk to the varnish CLI, you want to go through the vadmin-plugin. This is a two-step process:

int handle;

void foobar_init(struct agent_core_t *core)
{
        handle = ipc_register(core, "vadmin");
}

This part of the code gives you a socket to talk to the vadmin module. Actually talking to other modules in foobar_init() is not going to work, since the module isn't started yet.

And proper etiquette is not to use a global variable, but to use the plugin structure for your plugin, present in core:

struct foobar_priv_t {
        int vadmin;
};

void foobar_init(struct agent_core_t *core)
{
        struct foobar_priv_t *priv = malloc(sizeof(struct foobar_priv_t));
        struct agent_plugin_t *plug;
        plug = plugin_find(core,"foobar");
        priv->vadmin = ipc_register(core,"vadmin");
        plug->data = (void *)priv;
        plug->start = NULL;
}

In this example, we have a private data structure for the module, which we allocate in the init function. Every plugin has a generic struct agent_plugin_t data structure already allocated for it and hooked onto the core->plugins list. This allows you to store generic data, as the core data structure is the one typically passed around.


The Varnish Agent uses a lot of assert()s. This is similar to what Varnish does. It lets you, the developer, state that "we assume this worked, but if it didn't, you really shouldn't just continue". It's excellent for catching obscure bugs before they actually become obscure, and for letting you know where you actually need proper error handling.

Let's take a closer look at the generic struct agent_plugin_t:

struct agent_plugin_t {
        const char *name;
        void *data;
        struct ipc_t *ipc;
        struct agent_plugin_t *next;
        pthread_t *(*start)(struct agent_core_t *core, const char *name);
        pthread_t *thread;
};
The name should be obvious. The void *data is left for the plugin to define. It can be ignored if your plugin doesn't need any data at all (what does it do?).

struct ipc_t *ipc is the IPC-structure for the plugin. This tells you that all plugins have an IPC present. This is to allow you to run ipc_register() before a plugin has initialized itself. Otherwise we'd have to worry a lot more about which order modules were loaded.

Next is *next. This is simply because the plugins are part of a linked list.

The start() function pointer is used to define a function that will start your plugin. This function can do pretty much anything, but has to return fairly fast. If it spawns off a thread, it's expected to return the pthread_t * data structure, as the agent will later wait for it to join. Similarly, *thread is used for the same purpose.

Using the IPC

You've got a handle to work with, let's use it. To do that, let's look at the vping plugin, starting with init and start:

static void *vping_run(void *data);

static pthread_t *
vping_start(struct agent_core_t *core, const char *name)
{
        pthread_t *thread = malloc(sizeof (pthread_t));
        pthread_create(thread, NULL, vping_run, core);
        return thread;
}

void
vping_init(struct agent_core_t *core)
{
        struct agent_plugin_t *plug;
        struct vping_priv_t *priv = malloc(sizeof(struct vping_priv_t));
        plug = plugin_find(core,"vping");

        priv->vadmin_sock = ipc_register(core,"vadmin");
        priv->logger = ipc_register(core,"logger");
        plug->data = (void *)priv;
        plug->start = vping_start;
}

vping_init() grabs a handle for the vadmin (varnish admin interface) plugin and the logger. It also assigns vping_start() to the relevant pointer.

vping_start() simply spawns a thread that runs vping_run.

static void *vping_run(void *data)
{
        struct agent_core_t *core = (struct agent_core_t *)data;
        struct agent_plugin_t *plug;
        struct vping_priv_t *ping;
        struct ipc_ret_t vret;

        plug = plugin_find(core,"vping");
        ping = (struct vping_priv_t *) plug->data;

        logger(ping->logger, "Health check starting at 30 second intervals");
        while (1) {
                ipc_run(ping->vadmin_sock, &vret, "ping");
                if (vret.status != 200)
                        logger(ping->logger, "Ping failed. %d ", vret.status);
                free(vret.answer);

                ipc_run(ping->vadmin_sock, &vret, "status");
                if (vret.status != 200 || strcmp(vret.answer,"Child in state running"))
                        logger(ping->logger, "%d %s", vret.status, vret.answer);
                free(vret.answer);
                sleep(30);      /* the "30 second intervals" */
        }
        return NULL;
}

The vping module was the first module written, before the varnish admin interface was a module. It simply pings Varnish over the admin interface.

This also illustrates how to use the logger: Grab a handle, then use logger(handle,fmt,...), similar to how you'd use printf().

The IPC mechanism returns data through a vret-structure. For vadmin, this is precisely how Varnish would return it.


ipc_run() dynamically allocates memory for ret->answer. FREE IT.

The logger also returns a vret-like structure, but the logger() macro handles this for you.

Hooking up to HTTP!

Hooking up to HTTP is ridiculously easy.

Let's look at echo, comments removed:

struct echo_priv_t {
        int logger;
};

static unsigned int echo_reply(struct httpd_request *request, void *data)
{
        struct echo_priv_t *echo = data;
        logger(echo->logger, "Responding to request");
        send_response(request->connection, 200, request->data, request->ndata);
        return 0;
}

void echo_init(struct agent_core_t *core)
{
        struct echo_priv_t *priv = malloc(sizeof(struct echo_priv_t));
        struct agent_plugin_t *plug;
        plug = plugin_find(core,"echo");
        priv->logger = ipc_register(core,"logger");
        plug->data = (void *)priv;
        plug->start = NULL;
        httpd_register_url(core, "/echo", M_POST | M_PUT | M_GET, echo_reply, priv);
}

This is the ENTIRE echo plugin. httpd_register_url() is the key here. It registers a URL base, /echo in this case; a set of request methods (POST, PUT and GET in this case; DELETE is also supported); a callback to execute; and some optional private data.

The echo_reply function is now executed every time a POST, PUT or GET request is received for URLs starting with /echo.

You can respond with send_response() as demonstrated above, or the shorthands send_response_ok(request->connection, "Things are all OK!"); and send_response_fail(request->connection, "THINGS WENT BAD");.


Currently all HTTP requests are handled in a single thread. This means you really, really shouldn't block.

But write your handler with thread safety in mind anyway: we might switch to a multi-threaded request handler in the future.

Know your HTTP

"REST"-interfaces are great, if implemented correctly. A short reminder:

  • GET requests are idempotent and should not cause side effects. They should be purely informational.
  • PUT requests are idempotent, but can cause side effects. Example: PUT /start can be run multiple times.
  • POST requests do not have to be idempotent, and can cause side effects. Example: POST /vcl/ will upload new copies of the VCL.
  • DELETE requests are idempotent, and can have side effects. Example: DELETE /vcl/foobar.

Test your code!

Unused code is broken code. Untested code is also broken code.

Pretty much all functionality is tested. Take a look in tests/.

If your code is to be included in an official release, someone has to write test cases.

I also advise you to add something in html/index.html to test it if that's feasible. It also tends to be quite fun.

Getting started

To get started, grab the code and get crackin'.

I advise you to read include/*.h thoroughly.


The Varnish Agent 2.1

Posted on 2013-01-31

We just released the Varnish Agent 2.1.

(Nice when you can start a blog post with some copy/paste!)

Two-ish weeks ago we released the first version of the new Varnish Agent, and now I have the pleasure of releasing a slightly more polished variant.

The work I've put into it over the last couple of weeks has gone towards increasing stability, resilience and fault tolerance.

For a complete-ish log, see the closed tickets for the 2.1 milestone on github.

This underlines what we seek to achieve with the agent: A rock stable operational service that just works.

If you've got any features you'd like to see in the agent, this is the time to bring them forth!

I've already started working on 2.2 which will include a much more powerful API for the varnishlog data (see docs/LOG-API.rst in the repo), and improved HTTP handling, including authentication.

So head over to the demo, play with it, if you break it, let me know! Try to install the packages and tell me about any part of the installation process that you feel is awkward or not quite right.


The Varnish Agent

Posted on 2013-01-22

We just released the Varnish Agent 2.0.

The Varnish Agent is an HTTP REST interface to control Varnish. It also provides a proof-of-concept front-end in HTML/JavaScript. In other words: a fully functional Web UI for Varnish.

We use the agent to interface between our commercial Varnish Administration Console and Varnish. This is the first agent written in C and the first version exposing an HTTP REST interface, so while "2.0" might suggest some maturity, it might be wiser to consider it a tech preview.


I've been writing the agent for the last few weeks, and it's been quite fun. This is the first time I've ever written JavaScript; it was initially just an afterthought that quickly turned into something quite fun.


I've had a lot of fun hacking on this and I hope you will have some fun playing with it too!


Tools of the trade - Job control

Posted on 2012-11-03

About the Tools of the trade series

After over a decade of using GNU/Linux, you pick up a few tricks. They become second nature to you. You don't even think about them when you're using them. They enter your regular tool chest, so to speak.

This blog post is the first in a series of what I hope to be many posts where I introduce basic tools, tricks and techniques to those of you who are less experienced with GNU/Linux. The goal is not to make you an expert on the tools, but to get you started and show you a few use cases.

As I hope to make this a series, please let me know if the style, topic and level of detail is appropriate, or if there are any particular topics that you're interested in.

If you enjoy the series, feel free to subscribe to the RSS feed, either for the entire blog or just these TOTT (tools of the trade) posts. And of course, I'd appreciate it if you helped me spread the word to others who might find these posts interesting, even if they're not for you.

Now, let's get started with job control and screen, two very simple tools that can make your life easier.

Job control, you say?

How often do you do this:

  • Open service_foo.conf
  • Edit
  • Save and close service_foo.conf
  • Restart the service foo
  • Get a syntax error
  • Reopen service_foo.conf
  • Navigate to the same position you were at
  • Edit
  • Save
  • Try restarting,
  • etc etc

It's pretty common.


$ long_running_command
# Darn, should've started it in the background instead!
$ long_running_command &

All of these situations can be dealt with using basic job control in your shell. Most proper shells have some job control, but since bash is by far the most common shell, we'll talk about how bash handles it.

It's actually very simple. Here's what you need to know:

Action    Effect
CTRL-Z    Stops (suspends) the currently active job
$ jobs    Lists all jobs and their state
$ fg      Wakes up the most recently stopped job
$ fg x    Wakes up job x, where x can be seen using the jobs command
$ bg      Sends the most recently stopped job to the background, as if you had started it with &
$ bg x    Sends job x to the background

A job can be any command that would normally run in the foreground. You can also use %prefix instead of the job number, where the prefix is the command you started. For instance if you run man bash to read up on job control, then stop it, you could resume the job with fg %man.

Stopping a job is not the same as putting it in the background. When you stop a job, it actually stops running. For your editor, this doesn't matter. Here's a simple example where I just have a script output the time:

kristian@luke:~$ ( while sleep 1; do date ; done )
Sat Nov  3 03:05:16 CET 2012
Sat Nov  3 03:05:17 CET 2012
Sat Nov  3 03:05:18 CET 2012
Sat Nov  3 03:05:20 CET 2012
Sat Nov  3 03:05:21 CET 2012
Sat Nov  3 03:05:22 CET 2012
Sat Nov  3 03:05:23 CET 2012
^Z
[2]+  Stopped                 ( while sleep 1; do date; done )
kristian@luke:~$ date
Sat Nov  3 03:05:33 CET 2012
kristian@luke:~$ jobs
[1]-  Stopped                 man bash
[2]+  Stopped                 ( while sleep 1; do date; done )
kristian@luke:~$ fg 2
( while sleep 1; do date; done )
Sat Nov  3 03:05:42 CET 2012
Sat Nov  3 03:05:43 CET 2012
Sat Nov  3 03:05:44 CET 2012
Sat Nov  3 03:05:45 CET 2012

Notice how no timestamps were printed for the time the command was stopped.

If you wanted that, you would have to put the job in the background. When you put jobs in the background, their output will generally pop up in your shell, just like what would happen if you used & without redirecting output.

There are a few shortcuts to job control too, though I personally don't use them. Take a look at the Job Control chapter in man bash for more.


Using your shell's job control is great for manipulating jobs within a single open shell, but it has many limitations too. It doesn't allow you to stop a job in one shell and pick it up again in another (perhaps at a later time, from another machine).

Screen is most famous for allowing you to keep programs running even if you lose your connection.

Screen is a simple wrapper around any command you run. You typically start screen with just screen and end up in a plain shell. You can also start a single command directly, for instance using screen irssi. Under the hood you've now created a screen "server" which is what your applications are connected to, and a screen "client" which is what your terminal is looking at. If you close your terminal, the client will stop, but the server will keep running and the applications inside it will be unaware of the disappearance of the terminal. You can also detach from the server manually by hitting ctrl-a d. All screen-bindings start with ctrl-a. I'll have a little list further down.

Here's a demo:

kristian@luke:~$ cat
while sleep 1; do
        date | tee -a screen-demo.log;
done
kristian@luke:~$ screen ./

(date printing starts)
^A d (detach)

[detached from 24859.pts-3.luke]
kristian@luke:~$ date
Sat Nov  3 03:24:53 CET 2012
kristian@luke:~$  tail -n 2 -f screen-demo.log
Sat Nov  3 03:24:53 CET 2012
Sat Nov  3 03:24:54 CET 2012
Sat Nov  3 03:24:55 CET 2012
Sat Nov  3 03:24:56 CET 2012
Sat Nov  3 03:24:57 CET 2012
Sat Nov  3 03:24:58 CET 2012
(keeps running)

The basics of screen are:

  • screen starts screen with a regular shell.
  • screen app starts screen running app. The app-argument can include arguments. screen irssi -! will start screen and irssi -!.
  • All screen-commands start with CTRL-a (^A).
  • CTRL-a d (^A d) detaches from screen. This happens automatically if you close the terminal or your ssh connection breaks or similar.
  • screen -r re-attaches to a screen session. If you have multiple screens running you will have to specify which one (it will prompt you to).
  • screen -r -d re-attaches to a screen session that you are still attached to somewhere else. This means that if you ssh to a server at work and open screen but forget to close it, you can take over that screen session when you get home for example.
  • screen -x attaches to a screen session without detaching any other screen clients. A good use case is ssh'ing to a server, starting screen and having your customer do the same with screen -x so he can see exactly what you're doing and even type himself. It's quite cool, so try it out!

Screen can also have multiple 'windows' inside a session. I mostly use "full screen windows" as they are simplest. Try it out while running screen:

  • Hit ^A c to create a new window.
  • Hit ^A n to go to the next window.
  • Hit ^A p to go to the previous window.
  • Hit ^A a to go to the window you were at last.

You can also show multiple windows at the same time (split screen) and jump to specific windows if you have many (e.g: jump from window 1 to 6 without going through window 2, 3, 4 and 5.). Check the screen manual page for more.

Screen has some quirks with regards to scrolling, though, so you may want to check out the man page for that too.


Ever need to re-configure network stuff over ssh?

Run the commands in screen.

What I often do is something along the lines of: ifdown eth0; sleep 5; ifup eth0; sleep 60 && ifconfig eth0 some-safe-ip for instance. This ensures that the commands run even if the connection drops. It also allows you to regain your old session if you have to reconnect.

Minor tips

  • Use tee file if you want both to write the output to a file (echo blatti > foo) and to see the output at the same time. tee -a will append instead of overwrite, similar to what >> does.
  • Simple bash loops are wonderful for testing. I use variations of while true; do blatti; done, while sleep 1; do blatti; done; and for a in foo bar bat; do echo $a; done frequently.