Reverse ssh tunnel

My ISP started to roll out broken IPv6 for home users, so my services aren’t reachable from outside anymore. I don’t need a full VPN solution, but sometimes I just want to ssh home to check a file etc. The simplest solution was to create a reverse ssh tunnel: the Raspberry Pi inside my home network connects to my public server via ssh. Logged in on the server, I can connect to a local port and get forwarded to the Raspberry Pi. That works really well for me.
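
Using the tunnel from the server is then just an ssh to the forwarded port; LOCALPORT and the pi user here are placeholders matching the commands further down:

$ ssh -p LOCALPORT pi@localhost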

Since wifi is a little bit flaky, I need to make sure that the ssh connection is reopened after a connection loss. You can write a very simple script like this and execute it via a cronjob (an example crontab entry follows the script).

#!/bin/bash

# grep matches its own process, so a count of 1 means no tunnel is running
COUNT=$(ps ax | grep 'ssh -Nf -R' | wc -l)

if [ "$COUNT" -eq 1 ]
then
    echo "No tunnel yet. Creating..."
    ssh -Nf -R LOCALPORT:localhost:PORT user@remote
else
    echo "Tunnel already exists. Aborting."
fi
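
A matching crontab entry to run the check every few minutes could look like this (the script path is just an example):

*/5 * * * * /home/pi/check-tunnel.sh >/dev/null 2>&1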

But I just found out about autossh, which does the monitoring for you. I tried to get it working with systemd, but without any success. Ideas are welcome.

$ cat /etc/systemd/system/autossh-tunnel.service
[Unit]
Description=reverse ssh tunnel
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=localuser
ExecStart=/usr/bin/autossh -f -M 0 remote -l remoteuser -N -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R LOCALPORT:localhost:PORT
ExecStop=/usr/bin/pkill autossh
Restart=always

$ sudo systemctl enable autossh-tunnel.service
$ sudo systemctl start autossh-tunnel.service

Looking at journalctl, I can see that it exits, but not why. Executing the command manually works fine.

systemd[1]: Starting reverse ssh tunnel...
systemd[1]: Started reverse ssh tunnel.
autossh[2468]: port set to 0, monitoring disabled
autossh[2474]: starting ssh (count 1)
ssh child pid is 2476
received signal to exit (15)
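
One idea I haven’t verified: with -f, autossh puts itself into the background, so the process systemd started exits right away and the unit gets torn down again, which would explain the signal 15. A foreground variant might look like this (untested sketch, same placeholders as above):

[Unit]
Description=reverse ssh tunnel
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=localuser
Environment="AUTOSSH_GATETIME=0"
# untested: no -f, so autossh stays in the foreground and systemd can track it
ExecStart=/usr/bin/autossh -M 0 -N -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R LOCALPORT:localhost:PORT remoteuser@remote
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target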

In the end I modified the bash script to use autossh.

#!/bin/bash

# grep matches its own process, so a count of 1 means no tunnel is running
COUNT=$(ps ax | grep 'autossh' | wc -l)

if [ "$COUNT" -eq 1 ]
then
  echo "No tunnel yet. Creating..."
  /usr/bin/autossh -f -M 0 remote -l remoteuser -N -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R LOCALPORT:localhost:PORT
else
  echo "Tunnel already exists. Aborting."
fi

If you have a better solution, let me know.

Building a cross-platform app with Apache Cordova, Ionic 3, AngularFire 2, Angular 4 and Google Firebase

After reading about 50234 different tutorials, I decided to write down the steps for a development environment on Ubuntu 16.

Install node.js, minimum version 6. Here are the steps for Ubuntu.

The first two commands reset/clean up your global installation. Also make sure there is no old binary in /usr/local/bin/.

$ sudo aptitude purge nodejs
$ sudo rm -r /usr/lib/node_modules
$ curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
$ sudo aptitude install nodejs

Also install the SDKs for your target platforms. Android or iOS.

Install the needed packages (-g for global installation).
$ sudo npm install -g ionic@latest cordova typescript typings

Your system should look something like this:

$ ionic info
Your system information:
Cordova CLI: 7.0.0
Ionic CLI Version: 2.2.3
Ionic App Lib Version: 2.2.1
ios-deploy version: Not installed
ios-sim version: Not installed
OS: Linux 4.4
Node Version: v6.10.3
Xcode version: Not installed

Create a new Ionic project. This will also create the project folder.
$ ionic start hybridapp blank --v2

where hybridapp is the name of the app and blank is a starter template layout (see $ ionic start --list for more examples). --v2 is needed to build with the newest Ionic version. Change into the project folder. This is your home now.

Install the needed npm packages (inside your project folder).

$ npm install firebase angularfire2 --save

Following this guide, I edited the package.json file. This is not needed when using the correct Ionic CLI version.

Go to https://firebase.google.com/ and create an account. Create a new project and select "Add Firebase to your web app". Edit the file src/app/app.module.ts and add your details.


import { BrowserModule } from '@angular/platform-browser';
import { ErrorHandler, NgModule } from '@angular/core';
import { IonicApp, IonicErrorHandler, IonicModule } from 'ionic-angular';

import { AngularFireModule } from 'angularfire2';

import { MyApp } from './app.component';
import { HomePage } from '../pages/home/home';
import { ListPage } from '../pages/list/list';

import { StatusBar } from '@ionic-native/status-bar';
import { SplashScreen } from '@ionic-native/splash-screen';

export const firebaseConfig = {
  apiKey: "your api key",
  authDomain: "your domain",
  databaseURL: "your url",
  storageBucket: "your id",
  messagingSenderId: "your id"
};

@NgModule({
  declarations: [
    MyApp,
    HomePage,
    ListPage
  ],
  imports: [
    BrowserModule,
    IonicModule.forRoot(MyApp),
    AngularFireModule.initializeApp(firebaseConfig)
  ],
  bootstrap: [IonicApp],
  entryComponents: [
    MyApp,
    HomePage,
    ListPage
  ],
  providers: [
    StatusBar,
    SplashScreen,
    {provide: ErrorHandler, useClass: IonicErrorHandler}
  ]
})
export class AppModule {}

Try starting the setup with

$ npm run ionic:serve

If everything works, back up your files and commit to your git repository so you have a working base installation.

You can now start developing. Have fun.

Turn-by-turn navigation with a single click

I was always looking for a way to start navigation with Google Maps on Android with just a single click. Of course you can set some presets or even a location, and for a few Google generations now there has even been a navigation shortcut for your work or home address. But one thing all these methods were missing: a real one-click solution. Every shortcut only gets you to the calculated route; you always have to start the actual navigation with another click.
Using Tasker, I found a way to finally realize what I needed.
Tasker has support for intents, and using the Google Maps intent works like a charm.

Create a new task.
Add System -> Send Intent
Action: android.intent.action.VIEW
Type: None
Data: google.navigation:q=location+you+look+for&mode=w

modes:

  • d for car (drive)
  • w for walk
  • b for bike
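
For example, the data field for walking navigation to a fixed destination (the destination is made up for illustration) would be:

google.navigation:q=Alexanderplatz+Berlin&mode=w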

Take a look at the manual page for all options.

Trakt.Watch 1.2 update – Loading screen, watchlist

I updated the pebble watchapp to version 1.2 and added two new features:

  • Loading screen
  • Watchlist

Since most entries require a web request to get the relevant data, there is always some loading time. Depending on your bluetooth connection to your phone and your phone’s network connection, this can take a few seconds. To make it visible that the app is working and processing your request, I added a loading screen that is shown while a web request is running.

I added a new menu entry: watchlist. At the moment the menu only shows the episodes on your watchlist. I’m planning to add shows and the option to mark them as watched.

I would love to hear some user feedback and your feature requests.


Trakt.Watch v1.1 for pebble released

I got my Pebble 2 a few weeks ago and decided to tinker a little bit with the SDK. Since I’m using trakt.tv to track my TV shows and wanted to try a few things, I built a watchapp for my Pebble and released it today.

At the moment the app is able to:

  • authenticate against the trakt.tv API (using the watchapp configuration dialog)
  • show your "on deck" episodes
  • add an episode to your watch history (with the current timestamp)
  • show your history of watched shows
  • "unwatch" an episode
  • show basic user information
  • show auth information

Maybe I will add some more features in the future.

You can find more info about the app here: https://qstracker.com/traktwatch/. You’re welcome to try the watchapp and leave a comment.

The process of building the app and everything else needed to operate it (OAuth handler etc.) was quite fun, and I’m thinking about porting this to some other APIs.

Fastest way to create a good-looking webpage with node.js

If you need a responsive website to showcase something, here is a really fast way to do so.

Download a design you like from html5up.net.

$ mkdir node-html5
$ cd node-html5
$ npm init (just accept default settings)
$ npm install express --save

Extract the content of the design pack into the directory. Create index.js in the directory with the following content:

var express = require('express')
var path = require('path')
var PORT = 60000
var app = express()

app.use('/assets', express.static('assets'))
app.use('/images', express.static('images'))

app.get('/', function (req, res) {
  res.sendFile(path.join(__dirname, 'index.html'))
})

app.listen(PORT, function () {
  console.log('Listening on port: ' + PORT)
})

The directory should look like this:

../node-html5/
├── assets
├── images
├── index.html
├── index.js
├── node_modules
├── package.json

Now edit the HTML, CSS and images as you like.
Start the webserver with

$ node index.js

and point your browser to http://localhost:60000.

To deploy this setup e.g. to uberspace, see this post.

WordPress: HTTP Error 500 after plugin activation

After activating a WordPress plugin, the whole website went offline and the webserver showed an HTTP error 500. You have to deactivate the plugin to access the site and the wp-admin interface again. You can do this either by editing the MySQL database table or, much simpler, by temporarily moving the plugins folder!
Just move the folder plugins in wp-content to plugins.old, create an empty plugins folder, refresh the dashboard and click on Plugins. Then remove the empty folder and move plugins.old back to plugins. You have to activate all plugins again afterwards.
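
On the shell this boils down to something like the following; the path is an assumption, adjust it to your installation:

$ cd /path/to/wordpress/wp-content
$ mv plugins plugins.old
$ mkdir plugins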

Source: https://codex.wordpress.org/FAQ_Troubleshooting#How_to_deactivate_all_plugins_when_not_able_to_access_the_administrative_menus.3F

Setting up a node.js webserver on an uberspace

I had been meaning to play around with node.js for a while and build a simple web service with it. There is a short article about nodejs in the uberspace wiki, and all the other information you need can be found there somewhere, but you still have to piece it all together yourself. So here is a summary of everything you have to take care of to run a simple prototype at the ubernauten.

node.js version

With the command

$ node -v
v0.10.41

you can display the nodejs version currently in use. It is usually very old. To use a current version when calling node, we change the PATH. You can list all available installed versions like this:

$ ls -ld /package/host/localhost/nodejs-*

In ~/.bash_profile, add the desired version in the last line, e.g. 6:

$ export PATH=/package/host/localhost/nodejs-6/bin:$PATH

Reload the environment variables with source ~/.bash_profile. After that, the correct version should be used.

$ node -v
v6.9.1

Simple node.js webserver

Since we are not the only ones on the uberspace host and the usual ports 80 and 443 are already used by Apache, we have to start our node.js server on a different port and then redirect the requests arriving on port 80 to it. The port should be between 61000 and 65535. We pick one (here 61003) and check whether it is already in use:

$ /usr/sbin/ss -ln | fgrep 61003

If no entry is shown, we can use this port. We set up the redirect with a .htaccess file in ~/html/:

RewriteEngine On
RewriteRule ^(.*) http://localhost:61003/$1 [P]

Now we can get started. We change into our DocumentRoot and install the node express.js module for a webserver.

$ cd ~/html
$ npm install express

Next we create the file index.js with the following content (adjust the port accordingly):

var express = require('express');

var app = express();
app.set('port', (process.env.PORT || 61003));

app.get('/', function (req, res) {
  res.send('Hello World!')
})

var server = app.listen(app.get('port'), function () {
 console.log('Started on port %s', app.get('port'));
});

The webserver can now be started for a quick test with

node index.js

and opening your own uberspace URL in the browser should show a "Hello World!". Done already!

Well, almost. We want the service to keep running even when we are not logged in, so we now have to set up our server as a daemon.

Setting it up as a service

Following the uberspace wiki, we create a service next. For this, first activate the supervisor and then add the service. Here it is simply called nodetest.

$ test -d ~/service || uberspace-setup-svscan 
$ uberspace-setup-service nodetest node ~/html/index.js

The uberspace-setup-service tool takes the name of the new service (nodetest) as its first parameter and the command (node ~/html/index.js) as its second. We will look at the result in a moment. If you want to test the whole thing, or if something goes wrong, you can remove the service again like this:

$ cd ~/service/nodetest
$ rm ~/service/nodetest
$ svc -dx . log
$ rm -rf ~/etc/run-nodetest

For every service, a separate subfolder is created in ~/service. This is where the run file, which contains the actual command, and the log files live. If you followed the article exactly, the last line of ~/service/nodetest/run, containing the call, should look like this:

exec /package/host/localhost/nodejs-6/bin/node /home/user/html/index.js 2>&1

The node in the second parameter was replaced by the full path. If you don’t want that and would rather always use the configured version, remove the path and just use node.

The service is controlled with the svc tool. The most important parameters:

-u  up, i.e. start the service
-d  down, i.e. stop the service
-h  hup, send a HUP signal (reload)

A restart of the service would look like this, for example:

$ svc -du ~/service/nodetest
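
To check whether the service is actually running, svstat from daemontools should do the job:

$ svstat ~/service/nodetest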

Logging

We can read the service’s log quite easily. With this small function in ~/.bashrc it gets even easier:

readlog()
{
        if [ -n "$1" ]; then
                zcat -f ~/service/$1/log/main/* | tai64nlocal | less;
        else
                echo "Usage: readlog <daemonname>";
        fi;
}

It is then called with:

$ readlog nodetest

COPS – Another OPDS catalog

The setup using the ownCloud app described here works really well, unless you want to share your books and catalog with someone else and you also use the ownCloud user for other stuff and files. Of course it would be possible to create a special books user and share the folder with other users etc., but this is too complex for my single-user installation. Looking for an ebook reader addon, I found COPS – Calibre OPDS (and HTML) PHP Server. COPS generates an OPDS catalog with multiple sorting features and provides a search function. It also includes an ebook reader.

Install some needed packages.

sudo aptitude install php5-gd php5-sqlite php5-json php5-intl

Download the latest version from github.
I created a new subfolder in the webserver’s document root under /var/www/cops/ and extracted the files.

Copy the example configuration.

sudo cp /var/www/cops/config_local.php.example /var/www/cops/config_local.php

Edit the config file and change the path to your ebook directory containing the metadata.db from calibre.

$config['calibre_directory'] = '/media/usb/owncloud/user/files/ebooks/';

Edit your nginx configuration to password-protect your book collection. Add this section to your server configuration.

location /cops {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}

Generate the .htpasswd file with your tool of choice. For testing use an online generator.
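
If the apache2-utils package is installed, htpasswd works as well; the user name here is just an example:

sudo htpasswd -c /etc/nginx/.htpasswd bookuser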

Point your browser to the encrypted SSL version of your URL, like https://yourip/cops. It should ask for a username and password and, after correct credentials, show you your collection. To use the catalog with an app like FBReader, you need to append feed.php to the URL, like https://yourip/cops/feed.php.


Replace harddisk to grow raid and lvm volume

I ran out of disk space on a 2-disk raid mirror. I had already replaced one of the harddisks with a bigger 4TB one. That size doesn’t allow for MBR anymore, so I needed to switch to GPT. The remaining smaller drive also has to be replaced. Here are some of my notes on the procedure for later use. In the end I didn’t use this guide: I had a good backup and some time on the weekend while nobody needed the server, so I opted for the live migration. Since I had already written most of the steps down, I will keep them and just add some notes at the end.

Mark the smaller disk as failed and remove it from the array.
mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md1 --fail /dev/sda2
mdadm --manage /dev/md2 --fail /dev/sda3
cat /proc/mdstat
mdadm --manage /dev/md0 --remove /dev/sda1
mdadm --manage /dev/md1 --remove /dev/sda2
mdadm --manage /dev/md2 --remove /dev/sda3

Shut down the system, replace the harddisk with the new one and boot a live system. Install the needed packages.
aptitude install mdadm gdisk
modprobe raid1

Start raid.
mdadm --examine --scan >> /etc/mdadm/mdadm.conf

Clone the GPT partition table to the new disk.
sgdisk --backup=table /dev/sdb
sgdisk --load-backup=table /dev/sda
sgdisk -G /dev/sda

Add to raid
mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md1 --add /dev/sda2
mdadm --manage /dev/md2 --add /dev/sda3

Synchronisation starts. You can watch the progress with
watch cat /proc/mdstat

Expand the raid to the new maximum size.
mdadm --grow /dev/md2 --size=max

Grow the LVM.
pvresize /dev/md2
lvextend -L +1TB /dev/mapper/deb7-home
resize2fs /dev/mapper/deb7-home
grub-install /dev/sdc --recheck

Reboot.

Manual integrity-check of raid.
/usr/share/mdadm/checkarray /dev/md0
/usr/share/mdadm/checkarray /dev/md1
/usr/share/mdadm/checkarray /dev/md2


Alternative: Live migration

Live migration is nearly the same, but you don’t have to reboot the system.

Hotplug the new (third) drive to your system. If the SATA controller is set to AHCI mode, the system should recognize the new drive.
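
If the new drive doesn’t show up, forcing a rescan of the SCSI bus can help. host0 is an assumption here, pick the host matching your controller:

echo "- - -" > /sys/class/scsi_host/host0/scan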

After cloning the partition table with sgdisk, add the drive to the raid.
mdadm /dev/md0 --manage --add /dev/sdc1
mdadm /dev/md1 --manage --add /dev/sdc2
mdadm /dev/md2 --manage --add /dev/sdc5

Grow the raid to 3 devices and let it recover.
mdadm /dev/md0 --grow -n3
mdadm /dev/md1 --grow -n3
mdadm /dev/md2 --grow -n3

Mark the to-be-replaced drive as failed and remove it from the raid array.
mdadm /dev/md0 --manage -f /dev/sda1 -r /dev/sda1
mdadm /dev/md1 --manage -f /dev/sda2 -r /dev/sda2
mdadm /dev/md2 --manage -f /dev/sda3 -r /dev/sda3

Shrink the array again to 2 drives.
mdadm /dev/md0 --grow -n2
mdadm /dev/md1 --grow -n2
mdadm /dev/md2 --grow -n2

Grow the raid and extend the PV, LV and filesystem as above.