Tag Archives: debian

Root login on Debian MariaDB

Debian switched to MariaDB by default. On a new installation you can log in to your MySQL database as root simply by executing the mysql command, without a password.

root@host:~# mysql
Welcome to the MariaDB monitor. Commands end with ; or \g.
Your MariaDB connection id is 56339
Server version: 10.1.26-MariaDB-0+deb9u1 Debian 9.1

Copyright (c) 2000, 2017, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MariaDB [(none)]>

This is, or can be, a nice security feature: connecting as root is only allowed for the root system user through the MySQL unix socket. On the other hand, you may have some software which needs the root user to access the database server. In this case you need to tweak some user settings.

First, set a password.

MariaDB [(none)]> update mysql.user set password=password('geheim') where user='root';
MariaDB [(none)]> flush privileges;

Next, change the connection type.

MariaDB [(none)]> update mysql.user set plugin='' where user='root';
MariaDB [(none)]> flush privileges;

The downside seems to be the init process and autostart procedure: if you set a password for the root account, you need to add it in cleartext to /etc/mysql/debian.cnf. You might want to think about whether that is acceptable. A better choice would be to create a second privileged user.
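
A minimal sketch of such a second user, using the hypothetical name admin and the example password from above; adjust host and privileges as needed:

MariaDB [(none)]> create user 'admin'@'localhost' identified by 'geheim';
MariaDB [(none)]> grant all privileges on *.* to 'admin'@'localhost' with grant option;
MariaDB [(none)]> flush privileges;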

COPS – Another OPDS catalog

The setup using the ownCloud app described here works really well, unless you want to share your books and catalog with someone else while also using your ownCloud user for other stuff and files. Of course it would be possible to create a dedicated books user and share the folder with other users etc., but this is too complex for my single-user installation. Looking for an ebook reader addon, I found COPS – Calibre OPDS (and HTML) PHP Server. COPS generates an OPDS catalog with multiple sorting features and provides a search function. It also includes an ebook reader.

Install some needed packages.

sudo aptitude install php5-gd php5-sqlite php5-json php5-intl

Download the latest version from GitHub. I created a new subfolder under the webserver's document root at /var/www/cops/ and extracted the files there.
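
A rough sketch of those two steps; the release URL and version are only examples, check the seblucas/cops repository for the actual latest release (the unzip package may need to be installed):

cd /tmp
wget https://github.com/seblucas/cops/releases/download/1.0.1/cops-1.0.1.zip
sudo mkdir -p /var/www/cops
sudo unzip cops-1.0.1.zip -d /var/www/cops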

Copy the example configuration.

sudo cp /var/www/cops/config_local.php.example /var/www/cops/config_local.php

Edit the config file and change the path to your ebook directory containing the metadata.db from calibre.

$config['calibre_directory'] = '/media/usb/owncloud/user/files/ebooks/';

Edit your nginx configuration to password-protect your book collection. Add the following section to your server block.

location /cops {
    auth_basic "Restricted";
    auth_basic_user_file /etc/nginx/.htpasswd;
}

Generate the .htpasswd file with your tool of choice; for a quick test you can use an online generator.
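
For example with htpasswd from the apache2-utils package, using the hypothetical username books (it will prompt for a password):

sudo aptitude install apache2-utils
sudo htpasswd -c /etc/nginx/.htpasswd books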

Point your browser to the SSL-encrypted version of your URL, e.g. https://yourip/cops. It should ask for a username and password and, after correct credentials, show you your collection. To use the catalog with an app like FBReader, you need to append feed.php to the URL, e.g. https://yourip/cops/feed.php.


Replace harddisk to grow raid and lvm volume

I ran out of disk space on a two-disk RAID mirror. I had already replaced one of the hard disks with a bigger 4 TB one. That size no longer allows for MBR, so I had to switch to GPT. The now smaller drive is also to be replaced. Here are some of my notes on the procedure for later use. In the end I didn't use this guide: I had a good backup and some time on the weekend while nobody needed the server, so I opted for the live migration. Since I had already written most of the steps down, I will keep them and just add some notes at the end.

Mark the smaller disk as failed and remove it from the array.
mdadm --manage /dev/md0 --fail /dev/sda1
mdadm --manage /dev/md1 --fail /dev/sda2
mdadm --manage /dev/md2 --fail /dev/sda3
cat /proc/mdstat
mdadm --manage /dev/md0 --remove /dev/sda1
mdadm --manage /dev/md1 --remove /dev/sda2
mdadm --manage /dev/md2 --remove /dev/sda3

Shut down the system, replace the hard disk with the new one and boot a live system. Install the needed packages.
aptitude install mdadm gdisk
modprobe raid1

Scan for the existing arrays, add them to the mdadm config and assemble the RAID.
mdadm --examine --scan >> /etc/mdadm/mdadm.conf
mdadm --assemble --scan

Clone the GPT partition table from the remaining disk (sdb) to the new disk (sda) and randomize the GUIDs so they don't collide.
sgdisk --backup=table /dev/sdb
sgdisk --load-backup=table /dev/sda
sgdisk -G /dev/sda

Add the new partitions to the RAID arrays.
mdadm --manage /dev/md0 --add /dev/sda1
mdadm --manage /dev/md1 --add /dev/sda2
mdadm --manage /dev/md2 --add /dev/sda3

Synchronisation starts. You can watch the progress with
watch cat /proc/mdstat

Expand the raid to the new maximum size.
mdadm --grow /dev/md2 --size=max

Grow the physical volume, extend the logical volume and filesystem, then reinstall GRUB.
pvresize /dev/md2
lvextend -L +1TB /dev/mapper/deb7-home
resize2fs /dev/mapper/deb7-home
grub-install /dev/sdc --recheck

Reboot.

Run a manual integrity check of the RAID arrays.
/usr/share/mdadm/checkarray /dev/md0
/usr/share/mdadm/checkarray /dev/md1
/usr/share/mdadm/checkarray /dev/md2

 

Alternative: Live migration

Live migration is nearly the same, but you don’t have to reboot the system.

Hotplug the new (third) drive to your system. If the SATA controller is set to AHCI mode, the system should recognize the new drive.
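
If the drive does not show up automatically, a rescan of the SCSI/SATA hosts usually helps; host0 is just an example, repeat for the other host entries and check the kernel log afterwards.

echo "- - -" > /sys/class/scsi_host/host0/scan
dmesg | tail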

After cloning the partition table with sgdisk, add the drive to the raid.
mdadm /dev/md0 --manage --add /dev/sdc1
mdadm /dev/md1 --manage --add /dev/sdc2
mdadm /dev/md2 --manage --add /dev/sdc5

Grow the raid to 3 devices and let it recover.
mdadm /dev/md0 --grow -n3
mdadm /dev/md1 --grow -n3
mdadm /dev/md2 --grow -n3

Mark the to-be-replaced drive as failed and remove it from the raid array.
mdadm /dev/md0 --manage -f /dev/sda1 -r /dev/sda1
mdadm /dev/md1 --manage -f /dev/sda2 -r /dev/sda2
mdadm /dev/md2 --manage -f /dev/sda3 -r /dev/sda3

Shrink the array again to 2 drives.
mdadm /dev/md0 --grow -n2
mdadm /dev/md1 --grow -n2
mdadm /dev/md2 --grow -n2

Grow the RAID and extend the PV, LV and filesystem as above.

Raspberry Pi: Owncloud setup revisited

The Raspberry Pi and ownCloud have been running for a few months now and I have really enjoyed my own personal cloud, but I was annoyed by the poor performance. One possible solution was to switch the SD card, which I did: I replaced the Transcend 16 GB SDHC card with a 4 GB one, and performance is much better now. Since setting up the system is a pretty simple and fast process, I didn't bother with cloning the card. I reinstalled Raspbian, followed my own guide on how to set up nginx and PHP, and loosely followed my other tutorial on how to install ownCloud 6 beta. Of course I needed to change some links etc.

Some more things I changed:

  1. ownCloud added a security check for trusted domains
  2. moved the ownCloud storage to an external USB drive
  3. changed the nginx webserver configuration: restrict to https only and …
  4. accessing php-fpm through a network socket

1. If you access the web interface of your ownCloud instance using different IPs, names etc., you need to add them to the "trusted_domains" parameter.

pi@raspberrypi ~ $ sudo vi /var/www/owncloud/config/config.php

'trusted_domains' =>
array (
  0 => '192.168.12.34',
  1 => 'your.dyndns.org',
),

2. Connect the USB drive and use lsblk and blkid to find the needed UUID.

pi@raspberrypi ~ $ lsblk && blkid
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 2,7T 0 disk
└─sda1 8:1 0 2,7T 0 part /media/usb
mmcblk0 179:0 0 3,7G 0 disk
├─mmcblk0p1 179:1 0 56M 0 part /boot
└─mmcblk0p2 179:2 0 3,7G 0 part /
/dev/mmcblk0p1: SEC_TYPE="msdos" LABEL="boot" UUID="7D5C-A285" TYPE="vfat"
/dev/mmcblk0p2: UUID="5d18be51-3217-4679-9c72-a54e0fc53d6b" TYPE="ext4"
/dev/sda1: LABEL="Backup3TB" UUID="1D3F163D4EEC069E" TYPE="ntfs"

Create the mountpoint /media/usb and edit /etc/fstab to mount the drive on startup.

pi@raspberrypi ~ $ sudo mkdir /media/usb

pi@raspberrypi ~ $ sudo vi /etc/fstab
proc /proc proc defaults 0 0
/dev/mmcblk0p1 /boot vfat defaults 0 2
/dev/mmcblk0p2 / ext4 defaults,noatime 0 1
UUID=1D3F163D4EEC069E /media/usb ntfs-3g defaults,auto,uid=pi,gid=www-data,umask=007,users 0 0
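
To verify the new entry without rebooting, you can mount everything from fstab and check the result:

pi@raspberrypi ~ $ sudo mount -a
pi@raspberrypi ~ $ df -h /media/usb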

While setting up ownCloud, you can now define /media/usb as your data storage. I'm not sure if there is a way to change this on an already running ownCloud setup.

3. Change the nginx configuration (/etc/nginx/sites-available/default) according to the ownCloud 6 documentation.

upstream php-handler {
    server 127.0.0.1:9000;
}

server {
    listen 80;
    return 301 https://your.dyndns.org$request_uri; # enforce https
}

# HTTPS server
#
server {
    listen 443 ssl;
    server_name your.dyndns.org localhost;

    root /var/www;

    autoindex off;
    index index.php index.html index.htm;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/ssl/server.crt;
    ssl_certificate_key /etc/nginx/conf.d/ssl/server.key;

    client_max_body_size 10G; # set max upload size
    fastcgi_buffers 64 4K;

    rewrite ^/caldav(.*)$ /remote.php/caldav$1 redirect;
    rewrite ^/carddav(.*)$ /remote.php/carddav$1 redirect;
    rewrite ^/webdav(.*)$ /remote.php/webdav$1 redirect;

    index index.php;
    error_page 403 /core/templates/403.php;
    error_page 404 /core/templates/404.php;

    location = /robots.txt {
        allow all;
        log_not_found off;
        access_log off;
    }

    location ~ ^/(data|config|\.ht|db_structure\.xml|README) {
        deny all;
    }

    location / {
        # The following 2 rules are only needed with webfinger
        rewrite ^/.well-known/host-meta /public.php?service=host-meta last;
        rewrite ^/.well-known/host-meta.json /public.php?service=host-meta-json last;

        rewrite ^/.well-known/carddav /remote.php/carddav/ redirect;
        rewrite ^/.well-known/caldav /remote.php/caldav/ redirect;

        rewrite ^(/core/doc/[^\/]+/)$ $1/index.html;

        try_files $uri $uri/ index.php;
    }

    location ~ ^(.+?\.php)(/.*)?$ {
        try_files $1 =404;

        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$1;
        fastcgi_param PATH_INFO $2;
        fastcgi_param HTTPS on;
        fastcgi_pass php-handler;
    }

    # Optional: set long EXPIRES header on static assets
    location ~* ^.+\.(jpg|jpeg|gif|bmp|ico|png|css|js|swf)$ {
        expires 30d;
        # Optional: Don't log access to assets
        access_log off;
    }
}
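
Before restarting nginx, it is worth letting it validate the new configuration first:

pi@raspberrypi ~ $ sudo nginx -t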

4. Modify the php5-fpm configuration to listen on a network socket.

pi@raspberrypi ~ $ sudo vi /etc/php5/fpm/pool.d/www.conf

;listen = /var/run/php5-fpm.sock
listen = 127.0.0.1:9000

Restart the services.

pi@raspberrypi ~ $ sudo service php5-fpm restart
pi@raspberrypi ~ $ sudo service nginx restart

Raspberry Pi: nginx Webserver with PHP and SSL

Installing the needed packages:

pi@raspberrypi ~ $ sudo aptitude install nginx php5-fpm php5-cgi php5-cli php5-common

There are different versions of nginx available. For a comparison take a look at the Debian wiki: https://wiki.debian.org/Nginx

Create the needed directory and php test file for later:

pi@raspberrypi ~ $ sudo mkdir /var/www
pi@raspberrypi ~ $ echo "<?php phpinfo(); ?>" | sudo tee /var/www/index.php
pi@raspberrypi ~ $ sudo chown -R www-data:www-data /var/www

Setting up SSL:

pi@raspberrypi ~ $ sudo mkdir /etc/nginx/conf.d/ssl && cd /etc/nginx/conf.d/ssl
pi@raspberrypi /etc/nginx/conf.d/ssl $ sudo openssl genrsa -out server.key 2048
pi@raspberrypi /etc/nginx/conf.d/ssl $ sudo openssl req -new -key server.key -out server.csr
pi@raspberrypi /etc/nginx/conf.d/ssl $ sudo openssl x509 -req -days 365 -in server.csr -signkey server.key -out server.crt

And finally configure nginx:

pi@raspberrypi /etc/nginx/conf.d/ssl $ sudo vi /etc/nginx/sites-available/default

server {
    listen 80;
    root /var/www;
    index index.php index.html index.htm;
    server_name localhost;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        try_files $uri $uri/ /index.html;
    }
}

# HTTPS server
#
server {
    listen 443;
    server_name localhost;
    root /var/www;
    autoindex on;
    index index.php index.html index.htm;

    ssl on;
    ssl_certificate /etc/nginx/conf.d/ssl/server.crt;
    ssl_certificate_key /etc/nginx/conf.d/ssl/server.key;

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        try_files $uri $uri/ /index.html;
    }
}
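
After saving the configuration, test it and restart nginx so the new server blocks become active:

pi@raspberrypi /etc/nginx/conf.d/ssl $ sudo nginx -t
pi@raspberrypi /etc/nginx/conf.d/ssl $ sudo service nginx restart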

Visit http://localhost or https://localhost and you should see your php configuration.

Restricted sftp access with rssh and ssh chroot

OpenSSH 4.9 was the first version of the famous daemon that came with built-in chroot functionality (changelog). Chrooting the sshd and restricting shell access to a few commands can be a great way to give a few users secure access for exchanging files. We will use the rssh shell to allow only sftp access for one user, locked into his chrooted home directory. Since it is dangerous to give a user write access to the root of a chroot, you have to create the user's home directory inside the chroot. In this example /home/ftp will be the chroot and /home/ftp/secftp is the home directory of the user, the place where he finds himself when connecting to the machine.

Install the rssh shell with

$ aptitude install rssh

and adjust the config file for the user secftp to allow sftp access.

$ vim /etc/rssh.conf

user=secftp:027:00010 #user:umask:proto

Then add the new user secftp (with /secftp as home directory, which resolves relative to the chroot, and /usr/bin/rssh as shell) to your system and set a password.

$ useradd -d /secftp -s /usr/bin/rssh -g users secftp
$ passwd secftp

Create the directory and adjust the ownership so secftp can read/write and other group members can read the uploaded files.

$ mkdir -p /home/ftp/secftp
$ chown secftp:users /home/ftp/secftp

Edit your sshd configuration and add the user specific options for your chroot. Don’t forget to add secftp to your AllowUsers (which you should have configured :)).

$ vim /etc/ssh/sshd_config

AllowUsers secftp

Subsystem sftp internal-sftp

Match User secftp
   ChrootDirectory /home/ftp
   AllowTCPForwarding no
   X11Forwarding no
   ForceCommand internal-sftp

Restart the sshd daemon and you should be done.
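
For example, followed by a quick test login over sftp (the init script name may differ depending on your release):

$ /etc/init.d/ssh restart
$ sftp secftp@localhost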

Sources:
http://www.gossamer-threads.com/lists/openssh/dev/44657
http://hp.kairaven.de/scpsftp/ssh-rssh-sftp.html
http://www.debian-administration.org/articles/590

Backup of MySQL databases

Why reinvent the wheel when there is already a reliable script for automatically backing up MySQL databases? WipeOut's – Automatic MySQL Backup
Download the current version from SourceForge and fill in a few pieces of information.

USERNAME=wordpressBackup
PASSWORD=P@ssw0rd
DBHOST=localhost
DBNAMES="wordpress"
BACKUPDIR="/var/backups/db"

The DBNAMES option lets you list individual databases or back up all of them with "ALL". Listing them individually is useful, for example, if you want to restrict the backup user's privileges to the respective databases (see the sketch at the end of this post).
Besides some advanced options, the script can also send a report or the backups themselves to a given e-mail address.

MAILCONTENT="files"
MAILADDR="root@carrier-lost.org"

Finally, copy the adjusted script to the /etc/cron.daily/ directory and run it once as a test.
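
A minimal sketch for the restricted backup user mentioned above, reusing the example credentials; the SELECT and LOCK TABLES privileges are usually enough for mysqldump:

mysql> GRANT SELECT, LOCK TABLES ON wordpress.* TO 'wordpressBackup'@'localhost' IDENTIFIED BY 'P@ssw0rd';
mysql> FLUSH PRIVILEGES;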

syslog-ng on vServer with Debian Lenny

Starting syslog-ng on a vServer with Debian Lenny fails with the message:

/etc/init.d/syslog-ng restart
Starting system logging: syslog-ng
Error opening file for reading; filename='/proc/kmsg', error='Operation not permitted (1)'
Error initializing source driver; source='s_all' failed!
Error initializing source driver; source='s_all'

You have to comment out a few lines in /etc/syslog-ng/syslog-ng.conf, since syslog-ng doesn't have direct access to the kernel messages on a vServer. Under "Sources":

file("/proc/kmsg" log_prefix("kernel: "));

and

# kern.* -/var/log/kern.log
log {
    source(s_all);
    filter(f_kern);
    destination(df_kern);
};

Syslog-ng should start just fine now.