
IoT, Code, Security, Server Stuff etc

Views are my own and not my employer's.



Setup two factor authenticator protection at login on Ubuntu or Debian

October 14, 2018 by Simon

This is a quick post that shows how I set up two-factor authenticator protection at login on Ubuntu or Debian.

Aside

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free UpCloud VM credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

Buy a domain name here

Domain names for just 88 cents!

Now on with the post.

Backup

I ensured I had a backup of my server. This is easy to do on UpCloud. If something goes wrong I will roll back.

Server Backup Confirmed

Why Setup 2FA on SSH connections

1) Firewalls or IP whitelists may not protect you from every intrusion.

2) SSH authentication bypass bugs may appear.

I’ve just relased libssh 0.8.4 and 0.7.6 to address CVE-2018-10933. This is an auth bypass in the server. Please update as soon as possible! https://t.co/Qhra2TXqzm

— Andreas Schneider (@cryptomilk) October 16, 2018

2FA is another layer of defence.

Yubico Yubi Key

Read my blog post here to learn how to use the Yubico YubiKey NEO hardware-based two-factor authentication device to improve authentication and logins to OSX and software.

Timezone

It is important that you set the correct timezone on the server you are trying to secure with 2FA (time-based codes depend on accurate clocks). I ran this command on Linux to set the timezone.

On Debian, I set the time using this guide.

dpkg-reconfigure tzdata

Check the time with the timedatectl command

> timedatectl
> Local time: Tue 2019-06-25 16:45:20 UTC
> Universal time: Tue 2019-06-25 16:45:20 UTC
> RTC time: Wed 2019-06-26 02:37:44
> Time zone: Etc/UTC (UTC, +0000)
> Network time on: yes
> NTP synchronized: yes
> RTC in local TZ: no

sudo hwclock --show

I set the timezone

> sudo timedatectl set-timezone Australia/Sydney
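As a quick sanity check, you can preview a zone's wall clock without changing system settings by using the TZ environment variable (my own aside, not part of the original steps):

```shell
# Print the current time as seen from two timezones; system config is untouched
TZ=UTC date
TZ=Australia/Sydney date
```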

I confirmed the timezone

> timedatectl
> Local time: Wed 2019-06-26 02:47:42 AEST
> Universal time: Tue 2019-06-25 16:47:42 UTC
> RTC time: Wed 2019-06-26 02:40:06
> Time zone: Australia/Sydney (AEST, +1000)
> Network time on: yes
> NTP synchronized: yes
> RTC in local TZ: no

I installed an NTP time server

I followed this guide to install an NTP time server (failed at: ntpdate linuxconfig.ntp) and this guide to manually sync

I installed the Google Authenticator app

sudo apt install libpam-google-authenticator

Configure Google Authenticator

Run google-authenticator and answer the following questions

Q1) Do you want authentication tokens to be time-based (y/n): Y

You will be presented with a token you can add to the Yubico Authenticator or other authenticator apps.

2FA Code

TIP: Write down any recovery codes displayed

Scan the code with your 2FA authenticator app (e.g. Google Authenticator, Yubico Authenticator or FreeOTP from https://freeotp.github.io)

Scan 2FA Code

The 2FA code is now available for use in my YubiCo Authenticator app

Authenticator App Ready

Q2) Do you want me to update your “/root/.google_authenticator” file? (y/n): Y

Q3) Do you want to disallow multiple uses of the same authentication
token? This restricts you to one login about every 30s, but it increases
your chances to notice or even prevent man-in-the-middle attacks (y/n): Y

Q4) By default, a new token is generated every 30 seconds by the mobile app.
In order to compensate for possible time-skew between the client and the server,
we allow an extra token before and after the current time. This allows for a
time skew of up to 30 seconds between the authentication server and client. If you
experience problems with poor time synchronization, you can increase the window
from its default size of 3 permitted codes (one previous code, the current
code, the next code) to 17 permitted codes (the 8 previous codes, the current
code, and the 8 next codes). This will permit for a time skew of up to 4 minutes
between client and server.
Do you want to do so? (y/n): Y
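To see why the window is counted in 30-second steps: a time-based code is derived from a counter equal to the Unix time divided by 30, and each adjacent code corresponds to the counter one step either side (a sketch of the arithmetic, not the actual PAM implementation):

```shell
# Each TOTP code corresponds to floor(unix_time / 30); the default window
# of 3 accepts the previous, current and next counter values.
now=$(date +%s)
counter=$(( now / 30 ))
echo "counter=$counter accepted=$((counter - 1)),$counter,$((counter + 1))"
```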

Q5) If the computer that you are logging into isn’t hardened against brute-force login attempts, you can enable rate-limiting for the authentication module. By default, this limits attackers to no more than 3 login attempts every 30s.
Do you want to enable rate-limiting? (y/n): Y

Review Google Authenticator Config

sudo nano ~/.google_authenticator

You can change the settings in this file if need be.

Edit SSH Configuration (Authentication)

sudo nano /etc/pam.d/sshd

Add the following line below the line “@include common-auth”

auth required pam_google_authenticator.so

Comment out the following line (this is the most important step; it forces 2FA)

#@include common-auth
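For reference, after both edits the relevant part of /etc/pam.d/sshd should look like this (an excerpt of how I expect the file to read; the rest of the file is unchanged):

```
# Password auth disabled; key + verification code enforced instead
#@include common-auth
auth required pam_google_authenticator.so
```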

Edit SSH Configuration (Challenge Response Authentication)

Edit the ssh config file.

sudo nano /etc/ssh/sshd_config

Search For

ChallengeResponseAuthentication

Set this to

yes

Ensure the following line exists

UsePAM yes

Add the following line

AuthenticationMethods publickey,password publickey,keyboard-interactive
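After those edits, the relevant lines of /etc/ssh/sshd_config should read as follows (a sketch of the final state; the rest of the file is untouched):

```
ChallengeResponseAuthentication yes
UsePAM yes
AuthenticationMethods publickey,password publickey,keyboard-interactive
```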

Edit Common Auth

sudo nano /etc/pam.d/common-auth

Add the following line before the line that says “auth [success=1 default=ignore] pam_unix.so nullok_secure”

auth required pam_google_authenticator.so
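The resulting /etc/pam.d/common-auth should then have the new line immediately above the stock pam_unix line (excerpt, assuming the default Debian file):

```
auth required pam_google_authenticator.so
auth [success=1 default=ignore] pam_unix.so nullok_secure
```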

Restart the SSH service and test the codes in a new terminal before rebooting.

TIP: Do not exit the working connected session; you may need it to fix issues.

Restart the SSH service and test it

/etc/init.d/ssh restart
[ ok ] Restarting ssh (via systemctl): ssh.service.

If you have failed to set it up correctly, authenticator codes will not work.

Failed attempts

Further authentication required
Using keyboard-interactive authentication.
Verification code:
Using keyboard-interactive authentication.
Verification code:
Using keyboard-interactive authentication.
Verification code:

When it was configured OK I was prompted for further information at the SSH login

Further Information required
Using keyboard-interactive authentication
Verification Code: ######
user@servername#

I am now prompted at login to insert a 2FA token (after inserting my YubiKey)

Working 2FA in Unix

Turn on 2FA on other sites

Check out https://www.turnon2fa.com and tutorials here.

I hope this guide helps someone.

Please consider using my referral code and get $25 UpCloud VM credit if you need to create a server online.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

V1.4 June 2019: Works on Debian 9.9

V1.3 turnon2fa.com

V1.2 ssh auth bypass

v1.1 Authenticator apps

v1.0 Initial Post

Filed Under: 2FA, 2nd Factor, Auth, Authorization, Code, Debian, Security, Ubuntu, UpCloud, Yubico, YubiKey Tagged With: app, at, authenticator, debian, factor, login, on, or, Protection, security, Setup, two, ubuntu, Yubico, YubiKey

Setup a dedicated Debian subdomain (VM), Install MySQL 14 and connect to it from a WordPress on a different VM

July 21, 2018 by Simon

This is how I set up a dedicated Debian subdomain (VM), Installed MySQL 14 and connected to it from a WordPress installation on a different VM

Aside

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free UpCloud VM credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving fearby.com from Vultr to UpCloud.

Buy a domain name here

Domain names for just 88 cents!

Now on with the post.

Fearby.com

I will be honest, fearby.com is my play server where I can code, learn about InfoSec and share (It’s also my stroke rehab blog).

There is no faster way to learn than actually doing. The problem is my “doing” usually breaks the live site from time to time (sorry).

I really need to set up a testing environment (DEV-TEST-LIVE or GREEN-BLUE) server(s). GREEN-BLUE has advantages as I can always have a hot spare ready. All I need to do is toggle DNS and I can set the GREEN or BLUE server as the live server.

But first I need to separate my database from my current fearby.com server and set up a new web server. Having a Green and Blue server that use one database server will help with near real-time production website switches.

Dedicated Database Server

I read the following ( Should MySQL and Web Server share the same server? ) at Percona Database Performance Blog. Having a separate database server should not negatively impact performance (It may even help improve speeds).

Deploy a Debian VM (not Ubuntu)

I decided to set up a Debian server instead of Ubuntu (mostly because of Debian’s strong focus on stability and security).

I logged into the UpCloud dashboard on my mobile phone and deployed a Debian server in 5 minutes. I will be following my existing guide on setting up Ubuntu on UpCloud (even though this is Debian).

TIP: Sign up to UpCloud using this link to get $25 free UpCloud VM credit.

Deploy Debian Server

Steps to deploy a Debian server:

  1. Login to UpCloud and go to Create server.
  2. Name your Server (use a fully qualified domain name)
  3. Add a description.
  4. Choose your data centre (Chicago for me)
  5. Choose the server specs (1x CPU, 50GB Disk, 2GB Memory, 2TB Traffic for me)
  6. Name the Primary disk
  7. Choose an operating system (Debian for me)
  8. Select an SSH Key
  9. Choose misc settings
  10. Click Deploy server

After 5 mins your server should be deployed.

After Deploy

Setup DNS

Login to your DNS provider and create DNS records pointing to the new IPs (IPv4 and IPv6) provided by UpCloud. It took 12 hours for DNS to replicate to me in Australia.

Add DNS records with your domain registrar: an A record = IPv4 and an AAAA record = IPv6

Setup a Firewall (at UpCloud)

I would recommend you set up a firewall at UpCloud as soon as possible (don’t forget to add the recommended UpCloud DNS IPs and any whitelisted IPs to your firewall).

Block everything and only allow

  • Port 22: Allow known IP(s) of your ISP or VPN.
  • Port 53: Allow known UpCloud DNS servers
  • Port 80 (ALL)
  • Port 443 (ALL)
  • Port 3306 Allow your WordPress site and known IP(s) of your ISP or VPN.

Read my post on setting up a whitelisted IP on an UpCloud VM… as it is a good idea.

UpCloud thankfully has a copy firewall feature that is very handy.

Copy Firewall rules option at UpCloud

After I set up the firewall I SSH’ed into my server (I use vSSH on OSX but you could use PuTTY).

I updated the Debian system with the following command

sudo apt update

Get the MySQL Package

Visit http://repo.mysql.com/ and get the URL of the latest apt-config repo deb file (e.g. “mysql-apt-config_0.8.9-1_all.deb”). Make a temp folder.

mkdir /temp
cd /temp

Download the MySQL deb Package

wget http://repo.mysql.com/mysql-apt-config_0.8.9-1_all.deb

Install the package

sudo dpkg -i mysql-apt-config_0.8.9-1_all.deb

Update the system again

sudo apt update

Install MySQL on Debian

sudo apt install mysql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libaio1 libatomic1 libmecab2 mysql-client mysql-common mysql-community-client mysql-community-server psmisc
The following NEW packages will be installed:
libaio1 libatomic1 libmecab2 mysql-client mysql-common mysql-community-client mysql-community-server mysql-server psmisc
0 upgraded, 9 newly installed, 0 to remove and 1 not upgraded.
Need to get 37.1 MB of archives.
After this operation, 256 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-community-client amd64 5.7.22-1debian9 [8886 kB]
Get:2 http://deb.debian.org/debian stretch/main amd64 mysql-common all 5.8+1.0.2 [5608 B]
Get:3 http://deb.debian.org/debian stretch/main amd64 libaio1 amd64 0.3.110-3 [9412 B]
Get:4 http://deb.debian.org/debian stretch/main amd64 libatomic1 amd64 6.3.0-18+deb9u1 [8966 B]
Get:5 http://deb.debian.org/debian stretch/main amd64 psmisc amd64 22.21-2.1+b2 [123 kB]
Get:6 http://deb.debian.org/debian stretch/main amd64 libmecab2 amd64 0.996-3.1 [256 kB]
Get:7 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-client amd64 5.7.22-1debian9 [12.4 kB]
Get:8 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-community-server amd64 5.7.22-1debian9 [27.8 MB]
Get:9 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-server amd64 5.7.22-1debian9 [12.4 kB]
Fetched 37.1 MB in 12s (3023 kB/s)
Preconfiguring packages ...
Selecting previously unselected package mysql-common.
(Reading database ... 34750 files and directories currently installed.)
Preparing to unpack .../0-mysql-common_5.8+1.0.2_all.deb ...
Unpacking mysql-common (5.8+1.0.2) ...
Selecting previously unselected package libaio1:amd64.
Preparing to unpack .../1-libaio1_0.3.110-3_amd64.deb ...
Unpacking libaio1:amd64 (0.3.110-3) ...
Selecting previously unselected package libatomic1:amd64.
Preparing to unpack .../2-libatomic1_6.3.0-18+deb9u1_amd64.deb ...
Unpacking libatomic1:amd64 (6.3.0-18+deb9u1) ...
Selecting previously unselected package mysql-community-client.
Preparing to unpack .../3-mysql-community-client_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-community-client (5.7.22-1debian9) ...
Selecting previously unselected package mysql-client.
Preparing to unpack .../4-mysql-client_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-client (5.7.22-1debian9) ...
Selecting previously unselected package psmisc.
Preparing to unpack .../5-psmisc_22.21-2.1+b2_amd64.deb ...
Unpacking psmisc (22.21-2.1+b2) ...
Selecting previously unselected package libmecab2:amd64.
Preparing to unpack .../6-libmecab2_0.996-3.1_amd64.deb ...
Unpacking libmecab2:amd64 (0.996-3.1) ...
Selecting previously unselected package mysql-community-server.
Preparing to unpack .../7-mysql-community-server_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-community-server (5.7.22-1debian9) ...
Selecting previously unselected package mysql-server.
Preparing to unpack .../8-mysql-server_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-server (5.7.22-1debian9) ...
Setting up libatomic1:amd64 (6.3.0-18+deb9u1) ...
Setting up psmisc (22.21-2.1+b2) ...
Setting up mysql-common (5.8+1.0.2) ...
update-alternatives: using /etc/mysql/my.cnf.fallback to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Setting up libmecab2:amd64 (0.996-3.1) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Setting up libaio1:amd64 (0.3.110-3) ...
Processing triggers for systemd (232-25+deb9u4) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up mysql-community-client (5.7.22-1debian9) ...
Setting up mysql-client (5.7.22-1debian9) ...
Setting up mysql-community-server (5.7.22-1debian9) ...
update-alternatives: using /etc/mysql/mysql.cnf to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Created symlink /etc/systemd/system/multi-user.target.wants/mysql.service -> /lib/systemd/system/mysql.service.
Setting up mysql-server (5.7.22-1debian9) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Processing triggers for systemd (232-25+deb9u4) ...

Secure MySQL

You can secure the MySQL server deployment (set options as needed)

sudo mysql_secure_installation

Enter password for user root:
********************************************
VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No: No
Using existing password for root.
Change the password for root ? ((Press y|Y for Yes, any other key for No) : No

... skipping.
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : Yes
Success.

Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : No

... skipping.
By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.

Remove test database and access to it? (Press y|Y for Yes, any other key for No) : Yes
- Dropping test database...
Success.

- Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : Yes
Success.

All done!

Install NGINX

I installed NGINX to allow the Adminer MySQL GUI to be used

I ran these commands to install NGINX.

sudo apt update
sudo apt upgrade
sudo apt-get install nginx

I edited my NGINX configuration as required.

  • Set a web server root
  • Set desired headers
  • Optimized NGINX (see past guides here, here and here)

I reloaded NGINX

sudo nginx -t
sudo nginx -s reload
sudo systemctl restart nginx

Install PHP

I followed this guide to install PHP on Debian.

sudo apt update
sudo apt upgrade

sudo apt install ca-certificates apt-transport-https
wget -q https://packages.sury.org/php/apt.gpg -O- | sudo apt-key add -
echo "deb https://packages.sury.org/php/ stretch main" | sudo tee /etc/apt/sources.list.d/php.list

sudo apt update
sudo apt install php7.2
sudo apt install php-pear php7.2-curl php7.2-dev php7.2-mbstring php7.2-zip php7.2-mysql php7.2-xml php7.2-cli php7.2-common

Install PHP FPM

apt-get install php7.2-fpm

Increase Upload Limits

You may need to temporarily increase upload limits in NGINX and PHP before you can restore a WordPress database. My fearby.com blog is about 87MB.

Add “client_max_body_size 100M;” to “/etc/nginx/nginx.conf”

Add the following to “/etc/php/7.2/fpm/php.ini”

  • post_max_size = 100M
  • upload_max_filesize = 100M
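Putting the two changes together (paths as above; I am assuming client_max_body_size goes inside the http { } block — revert these once the import is done):

```
# /etc/nginx/nginx.conf (inside http { })
client_max_body_size 100M;

# /etc/php/7.2/fpm/php.ini
post_max_size = 100M
upload_max_filesize = 100M
```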

Restore a backup of my MySQL database in MySQL

You can now use Adminer to restore your blog to MySQL. Read my post on Adminer here. I used Adminer to move my WordPress site from CPanel to a self-managed server a year ago.

First, log in to your source server and export the desired database; then log in to the target server and import it.

Firewall Check

Don’t forget to allow your WordPress site’s 2x Public IP’s and 1x Private IP to access port 3306 in your UpCloud Firewall.

How to check open ports on your current server

sudo netstat -plunt
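To pick out just the MySQL listener from that output, an awk filter over the netstat columns works (column 4 is the local address, column 7 the owning process; this one-liner is my own convenience, not from the original commands):

```shell
# Show protocol, local address and owning process for anything bound to 3306
sudo netstat -plunt | awk '$4 ~ /:3306$/ {print $1, $4, $7}'
```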

Set MySQL Permissions

Open MySQL

mysql --host=localhost --user=root --password=***************************************************************************

I ran these statements to grant the user logging in from the nominated IPs access to MySQL (databasename, sqluser and the host address placeholders are mine; substitute your own values).

mysql>
GRANT ALL ON databasename.* TO 'sqluser'@'IPv4Server1PublicAddress' IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO 'sqluser'@'IPv4Server1PrivateAddress' IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO 'sqluser'@'IPv4Server2PublicAddress' IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO 'sqluser'@'IPv4Server2PrivateAddress' IDENTIFIED BY '***********sql*user*password*************';

Reload permissions in MySQL

FLUSH PRIVILEGES;
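To confirm the grants took effect, a standard check (run in the same mysql session; my addition, not from the original post) is to list users and their allowed hosts:

```sql
SELECT user, host FROM mysql.user;
```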

Allow access to the Debian machine from known IP’s

Edit “/etc/hosts.allow”

Additions (known safe IP’s that need access to this MySQL remotely).

mysqld : IPv4Server1PublicAddress : allow
mysqld : IPv4Server1PrivateAddress : allow
mysqld : IPv4Server2PublicAddress : allow
mysqld : IPv4Server2PrivateAddress : allow

mysqld : ALL : deny

Tell MySQL which address to listen on

Edit “/etc/mysql/my.cnf”

Added:

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
language = /usr/share/mysql/English
bind-address = DebianServersInternalIPv4Address

You could also change the port to something non-standard for a little extra obscurity.

Restart MySQL

sudo service mysql restart

Install a second local firewall on Debian

Install ufw

sudo apt-get install ufw

Do add the IP of your desired server or VPN to access SSH

sudo ufw allow from 123.123.123.123 to any port 22

Do add the IP of your desired server or VPN to access WWW

sudo ufw allow from 123.123.123.123 to any port 80

Now add the known IP’s (e.g any web servers public (IPv4/IPv6) or Private IP’s) that you wish to grant access to MySQL (e.g the source website that used to have MySQL)

sudo ufw allow from 123.123.123.123 to any port 3306

Do add UpCloud DNS Servers to your firewall

sudo ufw allow from 94.237.127.9 to any port 53
sudo ufw allow from 94.237.40.9 to any port 53
sudo ufw allow from 2a04:3544:53::1 to any port 53
sudo ufw allow from 2a04:3540:53::1 to any port 53

Add all other rules as needed (if you stuff up and lock yourself out you can login to the server with the Console on UpCloud)

Restart the ufw firewall

sudo ufw disable
sudo ufw enable

Prevent MySQL starting on the source server

Now we can shut down MySQL on the source server (leave it there just in case).

Edit “/etc/init/mysql.conf”

Comment out the line that contains “start on ” and save the file

and run

sudo systemctl disable mysql

Reboot

shutdown -r now

Stop and Disable NGINX on the new DB server

We don’t need NGINX running now the database has been imported with Adminer.

Stop and prevent NGINX from starting up on startup.

/etc/init.d/nginx stop
sudo update-rc.d -f nginx disable
sudo systemctl disable nginx

Check to see if MySQL is Disabled

service mysql status
* mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; disabled; vendor preset: enabled)
Active: inactive (dead)

Yep

Test access to the database server in PHP code

Add the following to dbtest.php (it runs “SELECT guid FROM wp_posts” and lists each post’s GUID)

<?php

//External IP (charged after quota hit)
//$servername = 'db.yourserver.com';

//Private IP (free)
$servername = '10.x.x.x';

$username = 'username';
$password = '********your*password*********';
$dbname = 'database';

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}

$sql = 'SELECT guid FROM wp_posts';
$result = $conn->query($sql);

if ($result->num_rows > 0) {
    // output data of each row
    while($row = $result->fetch_assoc()) {
        echo $row["guid"] . "<br>";
    }
} else {
    echo "0 results";
}
$conn->close();
?>
Done

Check for open ports.

You can install nmap on another server and scan for open ports

Install nmap

sudo apt-get install nmap

Scan a server for open ports with nmap

You should see this from a server that is allowed to reach port 3306 (port 3306 should not be visible to non-whitelisted IPs, and it should not be visible to everyone).

sudo nmap -PN db.yourserver.com

Starting Nmap 7.40 ( https://nmap.org ) at 2018-07-20 14:15 UTC
Nmap scan report for db.yourserver.com (IPv4IP)
Host is up (0.0000070s latency).
Other addresses for db.yourserver.com (not scanned): IPv6IP
Not shown: 997 closed ports
PORT     STATE SERVICE
3306/tcp open  mysql

You should see something like this on a server that has access to see port 80/443 (a web server)

sudo nmap -PN yourserver.com

Starting Nmap 7.40 ( https://nmap.org ) at 2018-07-20 14:18 UTC
Nmap scan report for db.yourserver.com (IPv4IP)
Host is up (0.0000070s latency).
Other addresses for db.yourserver.com (not scanned): IPv6IP
Not shown: 997 closed ports
PORT     STATE SERVICE
80/tcp   open  http
443/tcp   open  https

I’d recommend you use a service like https://pentest-tools.com/network-vulnerability-scanning/tcp-port-scanner-online-nmap# to check for open ports.  https://hackertarget.com/tcp-port-scan/ is a great tool too.

https://www.infobyip.com/tcpportchecker.php is also a free port checker that you can use to verify individual closed ports.

Screenshot of https://www.infobyip.com/tcpportchecker.php

Hardening MySQL and Debian

Read: https://www.debian.org/doc/manuals/securing-debian-howto/ap-checklist.en.html

Configuring WordPress to use the dedicated Debian VM

On the source server that used to have MySQL edit your wp-config.php file for WordPress.

Remove

define('DB_HOST', 'localhost');

Add the following (read the update below; I changed the DNS host to the private IP to get free traffic)

//Original localhost
//define('DB_HOST', 'localhost');

//New external host via DNS (Charged after quota hit)
//define('DB_HOST', 'db.fearby.com');

//New external host via Private IP (Free)
define('DB_HOST','10.x.x.x');

Restart NGINX

sudo nginx -t
sudo nginx -s reload
sudo systemctl restart nginx

Restart PHP-FPM

service php7.2-fpm restart

Conclusion

Nice, I seem to have shaved off 0.3 seconds in load times (25% improvement)

1sec gtmtrix load time

Update: Using a Private IP or Public IP between WordPress and MySQL servers

After I released this blog post (version 1.0 with no help from UpCloud) UpCloud contacted me and said the following.

Hello Simon,

I notice there's no mention of using the private network IPs. Did you know that we automagically assign you one when you deploy with our templates. The private network works out of the box without additional configuration, you can use that communicate between your own cloud servers and even across datacentres.

There's no bandwidth charge when communicating over private network, they do not go through public internet as well. With this, you can easily build high redundant setups.

Let me know if you have any other questions.

--
Kelvin from UpCloud

I have updated my references in this post to replace the public IP address (that is linked to the DNS record for db.fearby.com) with the private IP address (e.g. 10.x.x.x). Your server’s private IP address is listed against the public IPv4 and IPv6 addresses.

I checked that the local ufw firewall did indeed allow the private IP access to MySQL.

sudo ufw status numbered |grep 10.x.x.x
[27] 3306                       ALLOW IN    10.x.x.x

On my new Debian MySQL server, I edited the file /etc/mysql/my.cnf and changed the IP to the private IP and not the public IP.

Now it looked like

[mysqld]
user            = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
port            = 3306
basedir         = /usr
datadir         = /var/lib/mysql
tmpdir          = /tmp
language        = /usr/share/mysql/English
bind-address    = 10.x.x.x

(10.x.x.x is my Debian server’s private IP)

On my WordPress instance, I edited the file  /www-root/wp-config.php

I added the new private host

//Original localhost
//define('DB_HOST', 'localhost');

//New external host via DNS (Charged after quota hit)
//define('DB_HOST', 'db.fearby.com');

//New external host via Private IP (Free)
define('DB_HOST','10.x.x.x');

(10.x.x.x is my Debian server’s private IP)

Also, on the Debian/MySQL server, ensure you have granted access to the private IP of the WordPress server

Edit /etc/hosts.allow

Add

mysqld : 10.x.x.x : allow

Restart MySQL

sudo systemctl restart mysql

TIP: Enable UpCloud Backups

Do set up automatic backups (and/or take manual backups). Backups are an extra charge but are essential IMHO.

UpCloud backups

Troubleshooting

If you can’t access MySQL, log back into MySQL

mysql --host=localhost --user=root --password=***************************************************************************

and run

GRANT ALL PRIVILEGES ON *.* TO 'sqluser'@'%' IDENTIFIED BY '***********sql*user*password*************'; FLUSH PRIVILEGES;

Reboot

Lower Upload Limits

Don’t forget to lower the file upload sizes in NGINX and PHP (e.g. back to 2M) now that the database has been restored.

I hope this guide helps someone.

TIP: Sign up to UpCloud using this link to get $25 free UpCloud VM credit.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.6 Changed public IP use to private IP use to ensure we are not charged when the server usage goes over the quota

v1.5 Fixed 03 typo (should have been 0.3)

v1.4 added disable nginx info

v1.3 added https://www.infobyip.com/tcpportchecker.php

v1.1 added https://hackertarget.com/tcp-port-scan/

v1.0 Initial Post

Filed Under: Debian, MySQL, VM, Wordpress Tagged With: 14, a, and, Connect, debian, dedicated, different, from, install, MySQL, Setup, Subdomain, to, vm, wordpress

Setting up the Debian Kali Linux distro to perform penetration testing of your systems

March 7, 2018 by Simon

This post will show you how to setup the Kali Linux distro to perform penetration testing of your systems

I have a number of guides on moving hosting away from cPanel, setting up VMs on AWS, Vultr or Digital Ocean, and installing and managing WordPress from the command line. Securing your systems is very important, so don’t stop learning (securing Ubuntu in the cloud, a securing checklist, running a Lynis system audit etc).

snip from: https://www.kali.org/about-us/

“Kali Linux is an open source project that is maintained and funded by Offensive Security, a provider of world-class information security training and penetration testing services. In addition to Kali Linux, Offensive Security also maintains the Exploit Database and the free online course, Metasploit Unleashed.”

Download Kali

I downloaded the torrent version (the HTTP download kept stopping, even on 50/20 NBN).

Download Kali

After the download finished I checked the SHA-256 sum to verify its integrity

cd /Users/username/Downloads/kali-linux-2018.1-amd64
shasum -a 256 ./kali-linux-2018.1-amd64.iso 
ed88466834ceeba65f426235ec191fb3580f71d50364ac5131daec1bf976b317  ./kali-linux-2018.1-amd64.iso

At least it matched the known (or hacked) hash here.
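The same check can be rehearsed end-to-end with a throwaway file in place of the real ISO (a minimal sketch; `sha256sum` is the GNU coreutils equivalent of the macOS `shasum -a 256` used above):

```shell
# Create a stand-in file for the downloaded ISO
printf 'demo iso contents' > /tmp/demo.iso

# Capture the computed hash, as you would copy it from the download page
expected=$(sha256sum /tmp/demo.iso | awk '{print $1}')

# sha256sum -c re-checks the file against the expected hash and prints OK/FAILED
echo "$expected  /tmp/demo.iso" | sha256sum -c -
```

If the hash does not match, `sha256sum -c` reports FAILED and exits non-zero, so you can script the check into your download workflow.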

Installing Kali in a VM on OSX (Parallels)

I use Parallels 11 on OSX to set up the Kali VM; you can use VirtualBox, VMware etc.

VM Setup in Parallels

Hardware: 2x CPU, 2048MB Ram, 32MB Graphics, 64GB Disk.

I selected Graphical Install (English, Australia, American English, host: kali, network: hyrule, New South Wales, Partition: Guided – entire disk, Default, Default, Default, Continue, Yes, Network Mirror: Yes, No Proxy) and installed the GRUB bootloader on the VM HD.

Post Install

Install Parallel Tools

Official Guide: https://kb.parallels.com/en/123968

I opened the VM, selected Actions then Install Parallels Tools; this mounted /media/cdrom/ and I copied all of its contents to /temp/

As recommended by the Parallels install bash script, I updated the kernel headers.

apt install linux-headers-4.14.0-kali1-amd64

Then the following from https://kb.parallels.com/en/123968

apt-get clean
apt-get update
apt-get upgrade -y
apt-get dist-upgrade -y
apt-get install dkms kpartx printer-driver-postscript-hp

Parallels Tools would not install; I think I need to upgrade to a newer version of Parallels, as the printer driver detection fails (even though the driver is installed).

Installing Google Chrome

I used the video below

I have to run chrome with

/usr/bin/google-chrome-stable %U --no-sandbox --user-data-dir &

It works.

Chrome

Running your first remote vulnerability scan in Kali

I found this video useful in helping me scan and check my systems for exploits

Simple exploit search in Armitage (metasploit)

Armitage Scan

A quick scan of my server revealed three open ports (22, 80 and 443). Port 80 redirects to 443 and port 22 is firewalled. I have WordPress, and the exploits I tried failed thanks to patching (always stay ahead of patching and updating your software and OS).

k006-ports

Without knowing what I was doing, I was able to check my WordPress site against known exploits.

If you open the Check Exploits menu at the end of the Attacks menu you can do a bulk exploit check.

kali_bulk

WP Scan

Kali also comes with a WordPress scanner

wpscan --url https://fearby.com

This will try to enumerate everything from your web server and WordPress plugins.

/xmlrpc.php was found and I was advised to deny access to that file in NGINX. xmlrpc.php is legitimate, but it can be abused in denial of service attacks.

location = /xmlrpc.php {
	deny all;
	access_log off;
	log_not_found off;
}

I had a hit for a vulnerability in a YouTube Embed plugin, but I already had a patched version.

k007-wpscan

TIP: Check your WordPress often.

More to come (Draft Post).

  • OWASP scanner
  • WPSCAN
  • Ethical Hacker modules
  • Cybrary training
  • Send tips to @FearbySoftware

Tips

Don’t leave unwanted ports open, install software securely, use unattended security updates in Ubuntu, update WordPress frequently, limit plugins, and consider running more verbose audit tools like Lynis.

More Reading

Read my OWASP Zap guide on application testing and Cloudflare guide.

I hope this guide helps someone.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.2 added More Reading links.

v1.1 Added bulk exploit check.

v1.0 Initial post

Filed Under: Exploit, Linux, Malware, Security, Server, SSH, Vulnerability Tagged With: debian, distro, Kali, Linux, of, penetration, perform, Setting, systems, testing, the, to, up, your

Setting up a fast distributed MySQL environment with SSL

September 13, 2016 by Simon

The following is a guest post from Shane Bishop from https://ewww.io/ (developer of the awesome EWWW Image Optimizer plugin for WordPress). Read my review of this plugin here.

ewww3

Setting up a fast distributed MySQL environment with SSL

I’m a big fan of redundancy and distribution when it comes to network services. I don’t like to keep all my servers in one location, or even with a single provider. I’m currently using three different providers right now for various services. But when it comes to database communication, this poses a bit of a problem. Naturally, you would implement a firewall to restrict connections only to specific IP addresses, but if you’ve got servers all across the United States (or the globe), the communication is completely unencrypted by default.

Fortunately, MySQL has the ability to secure those communications and even require that specific user accounts use encryption for all communication. So, I’m going to show you how to setup that encryption, give a brief overview of setting up MySQL replication and give you several examples of different ways to securely connect to your database server(s). I used several different resources in setting this up for EWWW I.O. but none of them had everything I needed, or some had critical errors in them:

Setting up MySQL and secure connections

Getting Started with MySQL over SSL

How to enable SSL for MySQL server and client

I use Debian 8 (fewer major releases than Ubuntu, and rock-solid stability), so these instructions will apply to MySQL 5.5 and PHP 5.6, although most of it will work fine on any system. If you aren’t using PHP, you can just skip that section, and apply this to MySQL client connections, and replication. I’ll try to point out any areas where you might have difficulty on other versions, and you’ll need to modify any installation steps that use apt-get to use yum instead if you’re on CentOS, RHEL, or SuSE. If you’re running Windows, sorry, but just stop it. I would never trust a Windows server to do any of these things on the public Internet even with secured connections. You could attempt to do some of this on a Windows box for testing, but you can setup a Linux virtual machine using VirtualBox for free if you really want to test things out locally.

Setup the Server

First, we need to install the MySQL server on our system (you should always use sudo, instead of logging in as root, as a matter of “best practice”):

sudo apt-get install mysql-server

The installation will ask for a root password, and for you to confirm it. This is the account that has full and complete privileges on your MySQL server, so pick a good one. If this gets compromised, all is lost (or very nearly). Backups are your best friend, but even then it might be difficult to know what damage was done, and when. You’ll also want to run this, to make sure your server isn’t allowing test/guest access:

sudo mysql_secure_installation

You should answer yes to just about everything, although you don’t have to change your root password if you already set a good one. And just to make sure I’m clear on this. The root password here is not the same as the root password for the server itself. This root password is only for MySQL. You shouldn’t even ever use the root login on your server, EVER. It should be disabled so that you can only run privileged operations as sudo. Which you can do like this:

sudo passwd -l root

That command just disabled the root user, and should also be a good test to verify you already have an account that can sudo successfully, although I’d recommend testing it with something a little less drastic before you disable the root login.

Generating Certificates & Keys for the server

Normally, setting up secure connections involves purchasing a certificate from an established Certificate Authority (CA), and then downloading that certificate to your machine. However, the prevailing attitude with MySQL seems to be that you should build your own CA so that no one else has any influence on the private keys used to issue your server certificates. That said, you can still purchase a cert if that feels like the right route to go for you. Every organization has different needs, and I’m a bit new to the MySQL SSL topic, so I won’t pretend to be an expert on what everyone should do.

The Certificate Authority consists of a private key, and a CA certificate. These are then used to generate the server and client certificates. Each time you generate a certificate, you first need a private key. These private keys cannot be allowed to fall into the wrong hands, but you also need to have them available on the server, as they are used in establishing a secure connection. So if anyone else has access to your server, you should make sure the permissions are set so that only the root user (or someone with sudo privileges) can access them.

The CA and your server’s private key are used to authenticate the certificate that the server uses when it starts up, and the CA certificate is also used to validate any incoming client certificates. By the same token, the client will use that same CA certificate to validate the server’s certificate as well. I store the bits necessary in the /etc/mysql/ directory, so navigate into that directory, and we’ll use that as a sort of “working directory”. Also, the first command here lets you establish a “sudo shell” so that you don’t have to type sudo in front of every command. Let’s generate the CA private key:

sudo -s
cd /etc/mysql/
openssl genrsa 2048 > cakey.pem

Next, generate a certificate based on that private key:

openssl req -sha256 -new -x509 -nodes -days 3650 -key cakey.pem > cacert.pem

Of note are the -sha256 flag (do not use -sha1 anymore, it is weak), and the certificate expiration, set by “-days 3650” (10 years). Answer all the questions as best you can. The common name (CN) here is usually the hostname of the server, and I try to use the same CN throughout the process, although it shouldn’t really matter what you choose as the CN. If you follow my instructions, the CN will not be validated, only the client and server certificates get validated against the CA cert, as I already mentioned. Especially if you have multiple servers, and multiple servers acting as clients, the CN values would be all over the place, so best to keep it simple.

So the CA is now setup, and we need a private key for the server itself. We’ll generate the key and the certificate signing request (CSR) all at once:

openssl req -sha256 -newkey rsa:2048 -days 3650 -nodes -keyout server-key.pem > server-csr.pem

This will ask many of the same questions, answer them however you want, but be sure to leave the passphrase empty. This key will be needed by the MySQL service/daemon on startup, and a password would prevent MySQL from starting automatically. We also need to export the private key into the RSA format, or MySQL won’t be able to read it:

openssl rsa -in server-key.pem -out server-key.pem

Lastly, we create the server certificate using the CSR (based on the server’s private key) along with the CA certificate and key:

openssl x509 -sha256 -req -in server-csr.pem -days 3650 -CA cacert.pem -CAkey cakey.pem -set_serial 01 > server-cert.pem
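The CA and server-certificate steps above can be rehearsed end-to-end in a throwaway directory before touching /etc/mysql (a sketch assuming the openssl CLI is installed; the CN values are placeholders):

```shell
# Work in a temp dir so nothing under /etc/mysql is disturbed
tmp=$(mktemp -d) && cd "$tmp"

# CA: private key plus self-signed certificate
openssl genrsa 2048 > cakey.pem 2>/dev/null
openssl req -sha256 -new -x509 -nodes -days 3650 -key cakey.pem \
  -subj "/CN=mysql-ca" > cacert.pem

# Server: key + CSR in one step, then export the key to RSA format
openssl req -sha256 -newkey rsa:2048 -nodes -keyout server-key.pem \
  -subj "/CN=mysql-server" > server-csr.pem 2>/dev/null
openssl rsa -in server-key.pem -out server-key.pem 2>/dev/null

# Sign the server CSR with the CA
openssl x509 -sha256 -req -in server-csr.pem -days 3650 \
  -CA cacert.pem -CAkey cakey.pem -set_serial 01 > server-cert.pem 2>/dev/null

# The server cert should validate against the CA before you point MySQL at it
openssl verify -CAfile cacert.pem server-cert.pem
```

The final `openssl verify` line is a useful sanity check on the real files too; MySQL will refuse to start its SSL layer if the cert and CA do not match.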

Now we have what we need for the server end of things, so let’s edit our MySQL config in /etc/mysql/my.cnf to contain these lines in the [mysqld] section:

ssl-ca=/etc/mysql/cacert.pem
ssl-cert=/etc/mysql/server-cert.pem
ssl-key=/etc/mysql/server-key.pem

If you are using Debian, those lines are probably already present, but commented out (with a # in front of them). Just remove the # from those three lines. If this is a fresh install, you’ll also want to set the bind-address so that it will allow communication from other servers:

bind-address = 198.51.100.10 # replace this with your actual IP address

or you can let it bind to all interfaces (if you have multiple IP addresses):

bind-address = *

Then restart the MySQL service:

sudo service mysql restart

Permissions

If this is an existing MySQL setup, you’ll want to wait until you have all the client connections setup to require SSL, but on a new install, you can run this to setup a new user with SSL required:

GRANT ALL PRIVILEGES ON `database`.* TO 'database-user'@'%' IDENTIFIED BY 'reallysecurepassword' REQUIRE SSL;

I recommend creating individual user accounts for each database you have, so substitute the name of your database in the above command, as well as replacing the database-user and “really secure password” with suitable values. The command above also allows them to connect from anywhere in the world, and you may only want them to connect from a specific host, so you would replace the ‘%’ with the IP address of the client. I prefer to use my firewall to determine who can connect, as it is a bit easier than running a GRANT statement for every single host that is permitted. One could use a wildcard hostname like *.example.com but that would entail a DNS lookup for every connection unless you make sure to list all your addresses in /etc/hosts on the server (yuck). Additionally, using your firewall to limit which hosts can connect helps prevent brute-force attacks. I use ufw for that, which is a nice and simple command-line interface to iptables. You also need to run this after you GRANT privileges:

FLUSH PRIVILEGES;

Generating a Certificate and Key for the client

With most forms of encryption, only the server needs a certificate and key, but with MySQL, both server and client can have encryption keys. A quick test from my local machine indicated that it would automatically trust the server cert when using the MySQL client, but we’ll setup the client to use encryption just to be safe. Since we already have a CA setup on the server, we’ll generate the client cert and key on the server. First, the private key and CSR:

openssl req -sha256 -newkey rsa:2048 -days 3650 -nodes -keyout client-key.pem > client-csr.pem

Again, we need to export the key to the RSA format, or MySQL won’t be able to view it:

openssl rsa -in client-key.pem -out client-key.pem

And the last step is to create the certificate from the CSR (generated from the client key) and sign it with the CA cert and key:

openssl x509 -sha256 -req -in client-csr.pem -days 3650 -CA cacert.pem -CAkey cakey.pem -set_serial 01 > client-cert.pem

We now need to copy three files to the client. The certs are just text files, so you can copy and paste them, or you can use scp to transfer them:

  • cacert.pem
  • client-key.pem
  • client-cert.pem

If you don’t need the full mysql-server on the client, or you just want to test it out, you can install the mysql-client like so:

sudo apt-get install mysql-client

Then, open /etc/mysql/my.cnf and put these three lines in the [client] section (usually near the top):

ssl-ca = /etc/mysql/cacert.pem
ssl-cert = /etc/mysql/client-cert.pem
ssl-key = /etc/mysql/client-key.pem

You can then connect to your server like so:

mysql -h 198.51.100.10 -u database-user -p

It will ask for a password, which you set to something really awesome and secure, right? At the MySQL prompt, you can just type the following command shortcut, and look for the SSL line, which should say something like “Cipher in use is …”

\s

You can also specify the --ssl-ca, --ssl-cert, and --ssl-key settings on the command line of the ‘mysql’ command to set the locations dynamically if need be. You may also be able to put them in your .my.cnf file (the leading dot makes it a hidden file, and it should live in ~/, your home directory). So for me that might be /home/shanebishop/.my.cnf
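For example, a ~/.my.cnf along those lines might look like this (same file paths as above; adjust to wherever you copied the client cert and key):

```
[client]
ssl-ca   = /etc/mysql/cacert.pem
ssl-cert = /etc/mysql/client-cert.pem
ssl-key  = /etc/mysql/client-key.pem
```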

Using SSL for mysqldump

To my knowledge, mysqldump does not use the [client] settings, so you can specify the cert and key locations on the command line like I mentioned, or you can add them to the [mysqldump] section of /etc/mysql/my.cnf. To make sure SSL is enabled, I run it like so:

mysqldump --ssl -h 198.51.100.10 -u database-user -preallysecurepassword database > database-backup.sql
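Since mysqldump works well for automated nightly backups, a hypothetical crontab entry might look like this (host, user, password, database and backup path are all placeholders from the examples above):

```
# Run at 02:00 daily: dump over SSL, compress, date-stamp the file
0 2 * * * mysqldump --ssl -h 198.51.100.10 -u database-user -preallysecurepassword database | gzip > /var/backups/database-$(date +\%F).sql.gz
```

Note that % must be escaped as \% inside a crontab line.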

Setup Secure Connection from PHP

That’s all well and good, but most of the time you won’t be manually logging in with the mysql client, although mysqldump is very handy for automated nightly backups. I’m going to show you how to use SSL in a couple other areas, the first of which is PHP. It’s recommended to use the “native driver” packages, but from what I could see, the primary benefit of the native driver is decreased memory consumption.  There just isn’t much to see in the way of speed improvement, but perhaps I didn’t look long enough. However, being one to follow what the “experts” say, you can install MySQL support in PHP like so:

sudo apt-get install php5-mysqlnd

If you are using PHP 7 on a newer version of Ubuntu, the “native driver” package is now standard:

sudo apt-get install php7.0-mysql

If you are on a version of PHP less than 5.6, you can use the example code at the Percona Blog. However, in PHP 5.6+, certificate validation is a bit more strict, and early versions just fell over when trying to use the mysqli class with self-signed certificates like we have. Now that the dust has settled with PHP 5.6 though, we can connect like so:

<?php
$server = '198.51.100.10';
$dbuser = 'database-user';
$dbpass = 'reallysecurepassword';
$database = 'database';
$connection = mysqli_init();
if ( ! mysqli_real_connect( $connection, $server, $dbuser, $dbpass, $database, 3306, '/var/run/mysqld/mysqld.sock', MYSQLI_CLIENT_SSL ) ) {
    error_log( 'Connect Error (' . mysqli_connect_errno() . ') ' . mysqli_connect_error() );
    die( 'Connect Error (' . mysqli_connect_errno() . ') ' . mysqli_connect_error() );
}
$result = mysqli_query( $connection, "SHOW STATUS like 'Ssl_cipher'" );
print_r( mysqli_fetch_assoc( $result ) );
mysqli_close( $connection );
?>

Saving this as mysqli-ssl-test.php, you can run it like this, and you should get similar output:

user@host:~$ php mysqli-ssl-test.php
Array
(
  [Variable_name] => Ssl_cipher
  [Value] => DHE-RSA-AES256-SHA
)

Setup Secure (SSL) Replication

That’s all fine for a couple of servers, but at EWWW.IO I quickly realized I could speed things up if each server had a copy of the database. In particular, a significant speed improvement can be had if you setup all SELECT queries to use the local database (replicated from the master). While a query to the master server might take 50ms or more, querying the local database gives you sub-millisecond query times. Beyond that, I also wanted to have redundant write ability, so I set up two masters that would replicate off each other and ensure I never miss an UPDATE/INSERT/DELETE transaction if one of them dies. I’ve been running this setup since the Fall of 2013, and it has worked quite well. There are a few things you have to watch out for. The biggest is that if a master server has a hard reboot and MySQL doesn’t get shut down properly, you have to re-setup the replication on any slaves that communicate with that master, as the binary log gets corrupted. You also have to resync the other master in a similar manner.

The other things to be careful of are conflicting INSERT statements. If you try to INSERT two records with the same primary key from two different servers, it will cause a collision if those keys are set to be UNIQUE. You also have to be careful if you are using numerical values to track various data points. Use MySQL's built-in arithmetic, rather than trying to query a value, add to it in your code, and then update the new value in a separate query.
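As a sketch of that last point, let the database do the addition in one atomic statement instead of a read-modify-write from application code (the table and column names here are hypothetical):

```sql
-- Safe under replication: a single atomic statement on each server
UPDATE stats SET hits = hits + 1 WHERE id = 42;

-- Risky: two servers can both read 10, both write 11, and one increment is lost
-- SELECT hits FROM stats WHERE id = 42;   (app adds 1 in code)
-- UPDATE stats SET hits = 11 WHERE id = 42;
```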

So first I’ll show you how to setup replication (just basic master to slave), and then how to make sure that data is encrypted in transit. We should already have the MySQL server installed from above, so now we need to make some changes to the master configuration in /etc/mysql/my.cnf. All of these changes should be made in the [mysqld] section:

max_connections = 500 # the default is 100, and if you get a lot of servers running in your pool, that may not cut it
server-id = 1 # any numerical value will do, but every server should have a unique ID, I started at 1 for simplicity
log-bin = /var/log/mysql/mysql-bin.log
log-slave-updates = true # this one is only needed if you're running a dual-master setup

I’ve also just discovered that it is recommended to set sync_binlog to 1 when using InnoDB, which I am. I haven’t had a chance to see how that impacts performance, so I’ll update this after I’ve had a chance to play with it. The beauty of that is it *should* avoid the problems with a server crash that I mentioned above. At most, you would lose 1 transaction due to an improper server shutdown. All my servers use SSD, so the performance hit should be minimal, but if you’re using regular spinning platters, then be careful with the sync_binlog setting.
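That setting goes in the same [mysqld] section as the options above; a minimal fragment (the performance caveat just mentioned applies, especially on spinning platters):

```
[mysqld]
# Flush the binary log to disk after every transaction (safer, slightly slower)
sync_binlog = 1
```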

Next, we do some changes on the slave config:

server-id = 2 # make sure the id is unique
report-host = myserver.example.com # this should also be unique, so that your master knows which slave it is talking to
log-bin = /var/log/mysql/mysql-bin.log

Once that is setup, you can run a GRANT statement similar to the one above to add a user to do replication, or you can just give that user REPLICATION SLAVE privileges.

IMPORTANT: If you run this on an existing slave-master setup, it will break replication, as the REQUIRE SSL statement seems to apply to all privileges granted to this user, and we haven’t told it what certificate and key to use. So run the CHANGE MASTER TO statement further down, and then come back here to enforce SSL for your replication user.

GRANT REPLICATION SLAVE ON *.* TO 'database-user'@'%' REQUIRE SSL;

Now we’re ready to synchronize the database from the master to the slave the first time. The slave needs 3 things:

  1. a dump of the existing data
  2. the binary log filename, as MySQL adds a numerical identifier to the log-bin setting above, and increments it periodically as the binary logs hit their max size
  3. the position within the binary log where the slave should start applying changes

The hard way (that I used 3 years ago) can be found in the MySQL Documentation. The easy way is to use mysqldump (found on a different page in the MySQL docs), which you probably would have used anyway for obtaining a dump of the existing data:

mysqldump --all-databases --master-data -u root -p > dbdump.db

By using the --master-data flag, it will insert items #2 and #3 into the generated SQL file, and you will avoid having to hand-type the binary log filename and coordinates. At any rate, you then need to log in via your MySQL client on the slave server and run a few commands at the MySQL prompt to prep the slave for the import (replacing the values as appropriate):

mysql -uroot -p
mysql> STOP SLAVE;
mysql> CHANGE MASTER TO
    -> MASTER_HOST='master_host_name',
    -> MASTER_USER='replication_user_name',
    -> MASTER_PASSWORD='replication_password';
exit

Then you can import the dbdump.db file (copy it from the master using SCP or SFTP):

mysql -uroot -p < dbdump.db

Once that is imported, we want to make sure our replication is using SSL. You can also run this on an existing server to upgrade the connection to SSL, but be sure to STOP SLAVE first:

mysql> CHANGE MASTER TO MASTER_SSL=1,
    -> MASTER_SSL_CA='/etc/mysql/cacert.pem',
    -> MASTER_SSL_CERT='/etc/mysql/client-cert.pem',
    -> MASTER_SSL_KEY='/etc/mysql/client-key.pem';

After that, you can start the slave:

START SLAVE;

Give it a few seconds, but you should be able to run this to check the status pretty quickly:

SHOW SLAVE STATUS\G

A successfully running slave should say something like “Waiting for master to send event”, which simply indicates that it has applied all transactions from the master, and is not lagging behind.
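Beyond that message, a few fields in the SHOW SLAVE STATUS output are worth checking (a non-exhaustive sketch of healthy values):

```
Slave_IO_Running: Yes         # connected to the master and reading its binlog
Slave_SQL_Running: Yes        # applying relayed transactions locally
Seconds_Behind_Master: 0      # not lagging behind the master
Master_SSL_Allowed: Yes       # the MASTER_SSL settings took effect
```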

If you have additional slaves to setup, you can use the same exact dbdump.db and all the SQL statements that followed the mysqldump command, but if you add them a month or so down the road, there are two ways of doing it:

  1. Grab a fresh database dump using the mysqldump command, and repeat all of the steps that followed the mysqldump command above.
  2. Stop the MySQL service on an existing slave and the new slave. Then copy the /var/lib/mysql/ folder to the new slave and make sure it is owned by the MySQL user/group (chown -R mysql:mysql /var/lib/mysql/). Lastly, start both slaves up again, and they’ll catch up to the master pretty quickly.

Conclusion

In a distributed environment, securing your MySQL communications is an important step in preventing unauthorized access to your services. While it can be a bit daunting to put all the pieces together, it is well worth the effort to make sure no one can intercept traffic from your MySQL server(s). The EWWW I.O. API supports encryption at every layer of the stack, to make sure our customer information stays private and secure. Doing so in your environment creates trust with your user base, and customer trust is a precious commodity.

Security

As a precaution, do check your website often on https://www.shodan.io to see if it exposes open software or is known to hackers.
Shane Bishop

Contact: https://ewww.io/contact-us/

Twitter: https://twitter.com/nosilver4u

Facebook: https://www.facebook.com/ewwwio/

Check out the fearby.com guide on bulk optimizing images automatically in WordPress. Official site: https://ewww.io/

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

V1.1 added shodan.io info

Filed Under: Cloud, Development, Linux, MySQL, Scalable, Security, ssl, VM Tagged With: certificate, cloud, debian, destributed, encrypted, fast, myswl, ssl


Disclaimer

Terms And Conditions Of Use All content provided on this "www.fearby.com" blog is for informational purposes only. Views are his own and not his employers. The owner of this blog makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. Never make changes to a live site without backing it up first.

Advertisement:

Footer

Popular

  • Backing up your computer automatically with BackBlaze software (no data limit)
  • How to back up an iPhone (including photos and videos) multiple ways
  • Add two factor auth login protection to WordPress with YubiCo hardware YubiKeys and or 2FA Authenticator App
  • Setup two factor authenticator protection at login on Ubuntu or Debian
  • Using the Yubico YubiKey NEO hardware-based two-factor authentication device to improve authentication and logins to OSX and software
  • I moved my domain to UpCloud (on the other side of the world) from Vultr (Sydney) and could not be happier with the performance.
  • Monitor server performance with NixStats and receive alerts by SMS, Push, Email, Telegram etc
  • Speeding up WordPress with the ewww.io ExactDN CDN and Image Compression Plugin
  • Add Google AdWords to your WordPress blog

Security

  • Check the compatibility of your WordPress theme and plugin code with PHP Compatibility Checker
  • Add two factor auth login protection to WordPress with YubiCo hardware YubiKeys and or 2FA Authenticator App
  • Setup two factor authenticator protection at login on Ubuntu or Debian
  • Using the Yubico YubiKey NEO hardware-based two-factor authentication device to improve authentication and logins to OSX and software
  • Setting up DNSSEC on a Namecheap domain hosted on UpCloud using CloudFlare
  • Set up Feature-Policy, Referrer-Policy and Content Security Policy headers in Nginx
  • Securing Google G Suite email by setting up SPF, DKIM and DMARC with Cloudflare
  • Enabling TLS 1.3 SSL on a NGINX Website (Ubuntu 16.04 server) that is using Cloudflare
  • Using the Qualys FreeScan Scanner to test your website for online vulnerabilities
  • Beyond SSL with Content Security Policy, Public Key Pinning etc
  • Upgraded to Wordfence Premium to get real-time login defence, malware scanner and two-factor authentication for WordPress logins
  • Run an Ubuntu VM system audit with Lynis
  • Securing Ubuntu in the cloud
  • No matter what server-provider you are using I strongly recommend you have a hot spare ready on a different provider

Code

  • How to code PHP on your localhost and deploy to the cloud via SFTP with PHPStorm by Jet Brains
  • Useful Java FX Code I use in a project using IntelliJ IDEA and jdk1.8.0_161.jdk
  • No matter what server-provider you are using I strongly recommend you have a hot spare ready on a different provider
  • How to setup PHP FPM on demand child workers in PHP 7.x to increase website traffic
  • Installing Android Studio 3 and creating your first Kotlin Android App
  • PHP 7 code to send object oriented sanitised input data via bound parameters to a MYSQL database
  • How to use Sublime Text editor locally to edit code files on a remote server via SSH
  • Creating your first Java FX app and using the Gluon Scene Builder in the IntelliJ IDEA IDE
  • Deploying nodejs apps in the background and monitoring them with PM2 from keymetrics.io

Tech

  • Backing up your computer automatically with BackBlaze software (no data limit)
  • How to back up an iPhone (including photos and videos) multiple ways
  • US v Huawei: The battle for 5G
  • Check the compatibility of your WordPress theme and plugin code with PHP Compatibility Checker
  • Is OSX Mojave on a 2014 MacBook Pro slower or faster than High Sierra
  • Telstra promised Fibre to the house (FTTP) when I had FTTN and this is what happened..
  • The case of the overheating Mac Book Pro and Occam’s Razor
  • Useful Linux Terminal Commands
  • Useful OSX Terminal Commands
  • Useful Linux Terminal Commands
  • What is the difference between 2D, 3D, 360 Video, AR, AR2D, AR3D, MR, VR and HR?
  • Application scalability on a budget (my journey)
  • Monitor server performance with NixStats and receive alerts by SMS, Push, Email, Telegram etc
  • Why I will never buy a new Apple Laptop until they fix the hardware cooling issues.

WordPress

  • Replacing Google Analytics with Piwik/Matomo for a locally hosted privacy focused open source analytics solution
  • Setting web push notifications in WordPress with OneSignal
  • Telstra promised Fibre to the house (FTTP) when I had FTTN and this is what happened..
  • Check the compatibility of your WordPress theme and plugin code with PHP Compatibility Checker
  • Add two factor auth login protection to WordPress with YubiCo hardware YubiKeys and or 2FA Authenticator App
  • Monitor server performance with NixStats and receive alerts by SMS, Push, Email, Telegram etc
  • Upgraded to Wordfence Premium to get real-time login defence, malware scanner and two-factor authentication for WordPress logins
  • Wordfence Security Plugin for WordPress
  • Speeding up WordPress with the ewww.io ExactDN CDN and Image Compression Plugin
  • Installing and managing WordPress with WP-CLI from the command line on Ubuntu
  • Moving WordPress to a new self managed server away from CPanel

General

  • Backing up your computer automatically with BackBlaze software (no data limit)
  • How to back up an iPhone (including photos and videos) multiple ways
  • US v Huawei: The battle for 5G
  • Using the WinSCP Client on Windows to transfer files to and from a Linux server over SFTP
  • Connecting to a server via SSH with Putty
  • Setting web push notifications in WordPress with OneSignal
  • Infographic: So you have an idea for an app
  • Restoring lost files on a Windows FAT, FAT32, NTFS or Linux EXT, Linux XFS volume with iRecover from diydatarecovery.nl
  • Building faster web apps with google tools and exceed user expectations
  • Why I will never buy a new Apple Laptop until they fix the hardware cooling issues.
  • Telstra promised Fibre to the house (FTTP) when I had FTTN and this is what happened..

Copyright © 2023 · News Pro on Genesis Framework · WordPress · Log in
