

Moving Oracle Virtual Box Virtual Machines to another disk

October 17, 2021 by Simon

 

Read First: Installing Windows 11 in a Virtual Machine on Windows 10 to test software compatibility

I recently installed Windows 11 in a Virtual Machine on Windows 10 to test software compatibility. I installed it onto a 10TB spinning magnetic drive and regretted it (it was super slow). I purchased a Samsung 980 Pro 2TB SSD so I could move my games and Virtual Machines onto it.

After a bit of Googling, people said it was near impossible to move Virtual Machines (or that you had to detach disks and go through many steps to move them).

This is not true.

Source and Destination

I had Virtual Machines stored on M:\VirtualMachines (10TB Western Digital Gold Magnetic Spinning Hard Drive) and I wanted to move them to G:\VirtualMachines (Samsung 980 Pro NVMe Solid State Drive).

I also had ISO images stored on “B:\Installs\700 Virtual Machine OS Installs” and I wanted to move them to “S:\Installs\700 Virtual Machine OS Installs”.

How to Move Oracle Virtual Machines

Shut down your Oracle Virtual Machines and close Oracle VirtualBox.

Open Windows Explorer and navigate to “C:\Users\simon\.VirtualBox” (but change “simon” to your username).

The 2 files we need to edit are “VirtualBox.xml” and “VirtualBox.xml-prev”

Windows explorer view

Edit the files and change the paths.

Edit XML File

Change the ISO paths and/or Virtual Machine paths.
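For reference, the path entries inside VirtualBox.xml look roughly like the snippet below (the UUIDs and paths are made-up examples and your file will differ slightly); I simply changed the old drive letters/paths to the new ones.

<MachineRegistry>
  <MachineEntry uuid="{11111111-2222-3333-4444-555555555555}" src="G:\VirtualMachines\Windows 11\Windows 11.vbox"/>
</MachineRegistry>
<MediaRegistry>
  <DVDImages>
    <Image uuid="{66666666-7777-8888-9999-000000000000}" location="S:\Installs\700 Virtual Machine OS Installs\Windows11.iso"/>
  </DVDImages>
</MediaRegistry>
<SystemProperties defaultMachineFolder="G:\VirtualMachines"/>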

Now you can start the VMs from the new drive. If the new drive is an SSD, tick the “Solid-state Drive” option on the attached virtual disk for extra performance.

Virtual Box and VM Started

You can also search for “*.vbox” files and review the Virtual Machine settings, as they are just XML files.
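For example, from a Command Prompt you can list the .vbox files on the new drive and confirm VirtualBox still sees every machine (the paths here are just examples):

dir /s /b G:\VirtualMachines\*.vbox
"C:\Program Files\Oracle\VirtualBox\VBoxManage.exe" list vms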

Enjoy

Filed Under: VM

Installing Windows 11 in a Virtual Machine on Windows 10 to test software compatibility

October 17, 2021 by Simon

This is a simple post that shows how I installed Windows 11 on Windows 10 inside an Oracle VM VirtualBox Virtual Machine.

I love Windows 10 and have many applications I rely on for work and fun.  Before I update to Windows 11, I want to test my software in a Virtual installation of Windows 11.  

I could:

  1. Remove my Windows 10 drive (NVMe SSD), insert a new drive and install a fresh copy of Windows 11.
  2. Upgrade my primary Windows 10 install to Windows 11.
  3. Install Windows 11 inside a virtual machine.

If all goes well I will upgrade my primary Windows 10 machine.

Virtual Machines

A Virtual Machine allows you to install an Operating System in a virtual environment (as long as you have enough processor cores, memory and storage space).

I am collecting many Virtual Machines, from Windows 1 to Windows 11.

View of VirtualBox and dozens of VMs

I can run up to 6 VMs at once (normally VirtualBox does not like more than this running at once).

6 VM's running at once

Downloading Windows 11

To download Windows 11 visit https://www.microsoft.com/en-us/software-download/windows11

https://www.microsoft.com/en-us/software-download/windows11 Screenshot

I selected Windows 11 in the dropdown then clicked Download.

Download button

I selected English language

English language drop down.

I then selected 64-bit Download

64-bit Download

The download (ISO file) was 5.1GB.

Downloading

The ISO file downloaded

ISO file in Explorer

Before You Begin

Before you begin, set your processor's power plan to High or Ultimate Performance (how to). Virtual Machines rely on a fast processor, and having your power plan set to Power Saver is a bad idea.

Windows Power Plans
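If you prefer the command line, the built-in powercfg tool can list and switch power plans; for example, run from an elevated Command Prompt, this activates the stock High performance plan (SCHEME_MIN is Windows' alias for it; Ultimate Performance has its own GUID shown by /list):

powercfg /list
powercfg /setactive SCHEME_MIN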

You will need at least 4 spare processor cores, memory, etc.

Also consult your mainboard manual and enable Virtualization/SVM (Google will help).

 

Now I need to set this up as a Virtual Machine in VirtualBox

Setting up a Windows 11 Virtual Machine

Download Oracle VM VirtualBox from Downloads – Oracle VM VirtualBox and install it.

In Oracle Virtual Box click Machine then New

File Menu in Virtual Box

I named the machine “Windows 11“, set Type to “Windows“ and Version to “Windows 10 (64-bit)”; this is just to allow the machine to boot (it's not going to install Windows 10).

I set 8GB of memory and selected “Create a virtual hard disk now“

Virtual Box New Machine Form

I allocated 70GB storage to the disk (dynamically allocated) as a VDI file type

70GB Storage and VDI Type

Under System I enabled PAE/NX (Physical Address Extension – Wikipedia) and allocated 8 Processor cores (4 is enough).

Enabled PAE/NX, 8 Cores

Under Display I allocated 256MB video memory and enabled 3D Acceleration.

Display Options Form

Under Storage, for the virtual CD-ROM, I attached the downloaded (above) ISO image for Windows 11.

Choose ISO Image for Virtual CD-ROM

The Windows 11 ISO Image was attached to the virtual CD-ROM

ISO Attached
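As an aside, roughly the same machine can be created from the command line with VBoxManage. This is only an illustrative sketch using the names and sizes from above (the VM name, file names and ISO path are assumptions, not the only options):

VBoxManage createvm --name "Windows 11" --ostype "Windows10_64" --register
VBoxManage createmedium disk --filename "Windows 11.vdi" --size 71680 --format VDI
VBoxManage modifyvm "Windows 11" --memory 8192 --cpus 4 --pae on --vram 256 --accelerate3d on
VBoxManage storagectl "Windows 11" --name "SATA" --add sata
VBoxManage storageattach "Windows 11" --storagectl "SATA" --port 0 --device 0 --type hdd --medium "Windows 11.vdi"
VBoxManage storageattach "Windows 11" --storagectl "SATA" --port 1 --device 0 --type dvddrive --medium "Windows11.iso"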

I am now ready to start the Windows 11 Virtual Machine for the first time.

Starting the Windows 11 Virtual Machine

I started the Virtual Machine

Start VM

The Virtual Machine is starting

VM is starting Window

Windows 11 Boot Logo (this is a good sign)

Windows 11 Boot screen

Windows 11 is asking for a language and time and currency preference.

Windows language setup

I clicked Next then Install

Install now

Setup is starting

Setup is starting

I was prompted to enter a serial.

Enter serial

I purchased one from VIP-Scdkey (The Professional Marketplace to sell Digital Game Keys and Gift Cards) for $27.53 USD.

These are single use keys

PC cant run

I received an error “This PC can’t run Windows 11. This PC doesn’t meet the minimum requirements to install this version.”

www.vip-cdkey.com windows 11

Researched a Fix

I researched a fix and found I needed to enable “Enable EFI (special OSes only)” in VirtualBox.

Enable EFI
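If you prefer, the same setting can be toggled from the command line while the VM is shut down (machine name as above):

VBoxManage modifyvm "Windows 11" --firmware efi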

Start the Virtual Machine Again

I started the VM again and was greeted with the UEFI boot screen

UEFI BIOS

Virtual Box was booting the Virtual Machine via UEFI

Virtual Box UEFI Boot screen

Turn Off TPM etc

I started the Setup again and went to the wizard step with “Install now”.

Install Now

I pressed Shift+F10 to reveal a command prompt

More Information here.

In the command prompt window I typed “regedit”, pressed Enter and added the keys as described here.
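I won't repeat the linked article here, but the commonly documented workaround is to add “bypass” values under a LabConfig registry key. From the setup command prompt the equivalent reg commands look roughly like this (use at your own risk; this is the widely shared method, not official guidance):

reg add HKLM\SYSTEM\Setup\LabConfig /v BypassTPMCheck /t REG_DWORD /d 1 /f
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassSecureBootCheck /t REG_DWORD /d 1 /f
reg add HKLM\SYSTEM\Setup\LabConfig /v BypassRAMCheck /t REG_DWORD /d 1 /f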

Regedit

I exited the registry editor and command prompt, pressed back and resumed the setup.

Setup is starting

I entered my serial number again

enter serial

I accepted the terms

accept terms

I selected Custom install.

Custom install

I clicked Next

Next

Setup was installing 🙂

Installing windows status

When setup finished, the Virtual Machine rebooted.

UEFI Bios

Windows was starting services

Starting Services

Windows setup took many minutes.

Getting Ready

Windows rebooted again

UEFI Boot

Setup was processing for a few minutes.

Just a Moment

Windows 11 boot screen is very bright.

Windows logo

Windows 11 Setup

I selected Australia as the country

Region select

Setup was thinking.

waiting

I selected US as the keyboard type

keyboard layout

I skipped adding a second keyboard layout.

second keyboard layout

Windows was checking for updates

Checking for updates

Setup spent a few minutes thinking

please wait

I entered a PC name

Name your PC

First Boot

Now Windows 11 is almost ready

just a moment

Much thinking (I should have installed this on a solid-state drive, not a magnetic spinning drive)

waiting

I selected that I will use this PC for personal use.

It looks like I cannot have a standalone account. I entered my Microsoft email.

add microsoft account

I validated my Microsoft account

2fa

I have linked my Microsoft account to Windows 11

linked ms account

I created a login pin

create pin

I set the pin

setup pin

The pin applied

pin applied

I turned off location and diagnostic data

privacy options

I ticked all experience options.

set experiences

I skipped the OneDrive Backup

One Drive

I skipped the Microsoft Office 365 trial

Office 365

I prefer to buy one-off Office serial keys.

www.vip-cdkey.com office

I skipped the XBox game pass

Game Pass

Windows 11 looked for updates

Windows 11 is getting ready again

getting ready

A few minutes

please wait

I should have installed this on a solid state drive

please wait

Windows 11 Setup is Complete

Yay, Windows 11 is installed, much bloatware was installed by default

1st boot desktop

Updates

I first looked for Windows updates.

check for updates

I installed all software updates

download now

I rebooted Windows 11 one more time.

lock screen

Timezone

The time was wrong so I set the right timezone.

timezone

I set my timezone

timezone dropdown

I had to sync the time.

date time

Right Click Menu

I noticed that the traditional right-click menu for files has been moved to a sub-menu.

right click menu

Themes

It looks like customizing your theme is back now.

themes

It looks like Microsoft is now selling themes and icons.

buy themes

I enabled Desktop Icons

desktop settings

I like all the main icons

desktop icons

That is better

desktop screenshot

I uninstalled all the bloat software

uninstall bloatware

I uninstalled all other software that I was not going to use.

uninstall onedrive

Dark Theme

I set the Dark Theme

set dark theme

I opened Edge and turned off syncing my data.

disable sync

I had to find the task manager

task manager

I linked my Android mobile phone

link phone

I noticed Virtual Desktops are back

virtual desktops

Boring Widgets

Widgets

Why is Microsoft partnering with Sky News?

Sky News Australia, Yuk

I hid all stories from Sky News Australia.

Hide Sky News

I set defaults in my Task Bar

Task Bar Settings

Task Manager has not changed

Task Manager

I disabled Teams

Disabled Teams

Virtual Box Guest Additions

I installed the Oracle VirtualBox Guest Additions

Installed Guest Additions

I opened the connected drive from Windows Explorer

Guest Additions Drive

Run VBoxWindowsAdditions.exe

Run VBoxWindowsAdditions.exe

I installed the Guest Additions

Guest Additions install wizard

Now I can run higher resolutions in the Virtual Machine

Resolution choices

Resize the Disk

I realised that 70GB was not enough to install all of my applications.

I shut down the Windows 11 Virtual Machine and expanded the disk to 150GB in the Oracle Virtual Media Manager.

Resize Disk
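The same resize can be done from the command line with VBoxManage if you prefer (the size is in MB and the VDI path is an example):

VBoxManage modifymedium disk "M:\VirtualMachines\Windows 11\Windows 11.vdi" --resize 153600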

I booted Windows 11, entered the Disk Management snap-in and tried to expand the disk.

Partitions

The problem was that a smaller partition sat between the end of the 70GB partition and the new free space.

I installed my valid copy of the EaseUS Partition Master software but it did not allow me to move and resize the partition.

I downloaded and installed AOMEI Partition Assistant.

I clicked “Resize/Move Partition” on the smaller partition.

Move Disk

I extended the 70GB partition to 150GB.

Resize

Long story short, the demo version cannot make changes. I purchased AOMEI Partition Assistant, moved the smaller partition and resized the 70GB partition to 150GB.

Purchase AOMEI

AOMEI Partition Assistant created a Windows PE boot image to carry out the disk changes. Nice.

Windows PE Boot

The disk operations started after the reboot

Progress bar

Nice progress bar.

Progress bar

All disk operations completed

Restart Now

Increased Disk

Yay, the Windows 11 disk has been increased.

Increased Disk

Testing Software

Now I can install all of my software and test it in Windows 11.

  • Office 2021
  • Adobe CC (Photoshop, Adobe DC, Premiere Pro, After Effects, Media Encoder)
  • Filezilla
  • WinDirStat
  • Discord
  • Blender
  • DUMo, SUMo
  • balenaEtcher
  • Sublime Text 3
  • Visual Studio Code
  • WinSCP
  • Putty
  • WinRAR
  • WinMerge
  • NotePad++
  • VLC Media Player
  • GIMP
  • CDBurnerXP
  • 7Zip
  • Agent Ransack
  • YubiKey Authenticator
  • Core Temp
  • CPU-Z, GPU-Z
  • etc

I will not be able to test games on this Virtual Machine though.

Disk Busy

All apps so far are working a treat.

The spinning magnetic disk is a bit slow though

Desktop Image

Conclusions

Windows 11 has presented no issues yet. Some things are hard to find but this will be ok, I will upgrade to Windows 11 when all the bugs are ironed out in a few months.

Final Desktop Screen Grab

TPM Security

Windows 11 requires that your system has a TPM (Trusted Platform Module).

Recent mainboard BIOSes have been adding firmware TPM support, but I bought a physical TPM chip so that BitLocker does not break when I upgrade my BIOS.

Physical TPM Chip (ASUS)

In my BIOS I can select the firmware-based TPM (“Enable Firmware TPM”) or the physical TPM (“Enable Discrete TPM”).

BIOS TPM Choices

AMD Processors

It looks like AMD processors are having issues with Windows 11, so I will hold off on installing Windows 11 on my main PC.

I will move the Virtual Machines to a Solid State drive to get extra speed. Guide here: Moving Oracle Virtual Box Virtual Machines to another disk

Version: 1.1 Added Moving Oracle Virtual Box Virtual Machines to another disk (fearby.com)

Filed Under: VM, Windows

Connecting to a server via SSH with Putty

April 7, 2019 by Simon

This post aims to show how you can connect to a remote VM server over SSH (Secure Shell) with a free program called Putty on Windows. This is not an advanced guide; I hope you find it useful.

2020 Update. I have stopped using Putty and WinSCP. I now use MobaXterm (a tabbed SSH client for Windows) as it is way faster than WinSCP and better than Putty. Read my review post of MobaXTerm here.

You will learn how to connect (via Windows) to a remote computer (Linux) using SSH (Secure Shell). Once you log in you can remotely edit web pages, learn to code, install programs or do just about anything.

Common Terms (Glossary)

  • Putty: Putty is a free program that allows you to connect to a server via SSH (and Telnet). Putty can be downloaded from here.
  • Port: A port is a number given to a virtual lane on the internet (a port is similar to a frequency in radio waves, but all ports share the same transport layer on the internet). Older unencrypted webpages work on Port 80, older mail worked on Port 25, and encrypted web pages work on Port 443. SSH (Secure Shell) uses Port 22 (classic Telnet uses Port 23). Read about port numbers here.
  • SSH: SSH is a standard that allows you to securely connect to a server over an encrypted connection. Read more here.
  • Shell: Shell or Unix Shell is the name given to the interactive command line interface to Linux. Read more about the shell here.
  • Telnet: Telnet is a standard on top of the TCP/IP protocol that allows two-way communication between computers (all communication is sent as characters and not graphics). Read more on Telnet here and read about the TCP protocols here and here.
  • VM: VM stands for Virtual Machine and is a name given to a server you can rent (but the hardware is owned by someone else). Read more here.

Read about other common glossary terms used on the Internet here:
https://en.wikipedia.org/wiki/Glossary_of_Internet-related_terms

Background

If you want a webpage on the internet (or just a server to learn how to program) it’s easier to rent a VM for a few dollars a month and manage it yourself (over SSH Secure Shell) than it is to buy a $5,000 server, place it in a data centre, pay for electricity and drive in every few days to update it. Remote management of VM servers via SSH/Secure Shell is the way to go for small to medium solutions.

  • A simple web hosting site may cost < $5 a month but is very limited.
  • A self-managed VM costs about $5 a month
  • A website service like Wix, Squarespace, Shopify or WordPress will cost about $30~99 a month.
  • A self-owned server will cost hundreds to thousands upfront.

There are pros and cons to all solutions above (e.g. cost, security, scalability, performance, risk) but these are outside this post’s topic. I have deployed VMs on providers like AWS, Digital Ocean, Vultr and UpCloud for years. If you need to buy a VM you can use this link and get $25 free credit.

I used to use the OSX Operating System on Apple computers. I was used to using the VSSH software program to connect to servers deployed on UpCloud (using this method). With the demise of my old Apple MacBook (due to heat) I have moved back to using Windows (I am never using Apple hardware again until they solve the heat issues).

Also, I prefer to use Linux servers in the cloud (over say Windows) because I believe they are cheaper, faster and more secure.

Enough talking, let’s configure a connection.

Public and Private Keys?

Whenever you want to connect to a remote server via SSH (Secure Shell) with key authentication you will need a public and private key to secure communications between you and the remote server.

The public key is configured on your server (on Linux you add the public key to this file ~/.ssh/authorized_keys).

The private key is used by programs (usually on your local computer) to connect to the remote server.


How to create a Public and Private Key on Linux

I usually run this command on Ubuntu or Debian Linux to generate a public and private SSH key.

sudo ssh-keygen -t rsa -b 4096

The key below was generated for this post and is not used online. Keys are like physical keys, people who have them and know where to use them can use them.

Output:

Generating public/private rsa key pair.
Enter file in which to save the key (/username/.ssh/id_rsa): ./server
Enter passphrase (empty for no passphrase): ********
Enter same passphrase again: ********
Your identification has been saved in ./server.
Your public key has been saved in ./server.pub.
The key fingerprint is:
SHA256:sxfcyn4oHQ1ugAdIEGwetd5YhxB8wsVFxANRaBUpJF4 [email protected]
The key's randomart image is:
+---[RSA 4096]----+
| .oB**[email protected]       |
|  +.==B.+        |
| o .o+o+..       |
|  .. +..o...     |
|    o ..Sooo.    |
|         ++o.    |
|        .o+o     |
|        .oo .    |
|         ...     |
+----[SHA256]-----+

The two files were created

server
server.pub
  • “server” is the private key
  • “server.pub” is the public key

Public/Private Key Contents

Public Key Contents (“server.pub”)

ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQC7Xo9bOCXJ7gVjP8tKOxHVId3KTo5I0VRU/kSRK3+mGd5VbDbQABo3tdWzYhzkjODzRS9TeL2dcLAQNNQKshi9IW5IGDS1NocSCLFQId5BFr9s3E79fkWqcZkKmwocepXOOZ91EDKgIFxviOzZKe99sdxxMoZzi1nxgVyXl4TnaelyiQxeKYniVs1iqDfYWQCxkKsmYit8TvGtOwrhLvKNh9362/y5ebpXVdFlRuB83eF7k8RHNYCQyOJJVx4cw3TIsAN0GMOwjuaOZbp7rR1d6k7RZmaApNRTbaWOXy32UiBST5TV/jXF2UL/4IBnn+yvCrM0v79e/3omgjlVVKfWByFzMv/YlBKCAX3xxtJQ9RkzTqseKupXmmJU0rik6Xuz31N2oyw4M7yJofSUGVCN0pnpKEvnKxqolfD9egdQy2XDaNioY7cvOO1qRegCKE0sDh1m5MzJWMhbDs7macSMyd6+0O5qWc/ByHy0G/mVbd8kO4jIuEzEs4IFkPCToEZp7KfkY7KRkOhccLbQ4ApCesUfBtGGAN1f33NnXCHae3Cx46nSd23fvgDZUVnjI47tNJH5Z8FNVlW/fp5Rgeu/aPUephnDX2IBxwIKQOmSTDY+nxU4V+c93H1gSOJfvqYbVKIAXKyN9Yh6LC44ZvLrL4q0TC0QlH2+kxMLnuj7zw== [email protected]

Private Key Contents (“server”), always keep the private key safe and never publish it.

-----BEGIN RSA PRIVATE KEY-----
Proc-Type: 4,ENCRYPTED
DEK-Info: AES-128-CBC,D34670C40CE3778974BEF97094010597

b4oecyqLsWt9n+G12ldVNlaQxSKF1wSrlBPg6FGiHRauTCyreUwoI2dMOAkwnGmN
8fcy51fH7D3Kg0G9fWWNPd+oUDwZmrpB8Mv6Ndk4bLYZEbkNOFgvPwNre7edTBOD
JGZRdWqb+yrywgvz3iTXPNjNK5REU3u3JmD69jInFNo92j765QQKA4sFgEyD/8g+
zg8yefIQAhEsVELC5LXPPyuTfA+x0Q+040PqCJ+FCISJI1CeZjLwk7Fbe453Vj81
zaDsurl5X5gaRUlVjB2asr6etWdMLWcalX4Nbyj2A10L3J4ONjKq3Wc2muJ0Q6ES
oNqBaU2iHPlK8yK0TGj/ERfjaG1qdlhBcow0pSapRqGopXBuVBLVuyc2NHe5CCTk
Ezq+LZGsVYmiOIIY4QRJdEN/DVLFHRGK/xA9A7unm484zXIEO6wznE0DuCTtyZs0
luJ3bKLRcack3K1Dphq0LjSG4YxQlkHewa9k9AKpDPTqeeKKckySakiDCGPT6htk
VqaCKrApAt6GQ2hLVXZ0BFVN5A3WUJ5s+HpFvTUzHTNZcdsVS4PgxhuCtnSO/BdS
/G+ODc4aZJNYQD9QQfWUnxkgnQJCWJ+aBZtKF7eDPRYY7qD9jWxubDzrFplBkmAi
O+aX5N8dpU3lEty4INjyh5LpgZW3swjUhEKWi/c1k+Qd1gCWzYzwAq2BfpWcF8Z+
c+y9lQUKbq2yDlxReCIsfb/hda5k1HjgaUlhKbjWIITSlGqf/NE9i+vj0rQEMQXQ
mxBoilfLUPd5A1ttG5XvqC2ex5HBmjzCazZ13Z/2c/PkwicHBmrf5bKYHZp49niV
44n8tZRamCUv6HaJUaKR22MigOG/qGppGPodGeLNj1DFLYAEQ78SYcVhEqIICBo1
t1yaIemUq8MWXSZz1K3cP4FEXQcEziQxFLU/0DCE0P0mIU3MExUmjB/nVE8vxb5l
p3ej3yrRGe+P2neco2gttgaTEi6l/S+0TIiZNstnVPG48BPW71mwVg9XR1d+avO7
OpXt0UgocX0xp7zBgK2up8Ai6v66WwjoNgyvFe02aK4/+fSC+aJ5D6N7JVNxd/bn
Py4W8oLKnrE1PKtIfBw/aE+rgudaMIyuxCaLllRKyDxVPPiJFp2iFcH/Y+k+0vDa
xE9Jpdd0zOWkZyebAxrS8zAUUNNaTQ+rWkj/zORjE4ptHpdwdazzHoQwIs+1kjsv
e/+JEmoskH7XozLnxClVhhWMXWfgQsPWBqPnGzieW0tv9SeIAU/BLJCHJRhBMAT1
ugBtcda1VMlAPVroYtVyUdCxkYZqGfIDbKqtOvvuBgUIUe/HnC3ExQQycC9F05BH
RJibaM/11MLTcZSO7KOK65Dg2v3VBhe6rfDl4tTR0yOySPXCacb9aMt2pMPTEe0/
wU49wCefchfD2bsR3kXPpUqm+HbkHORpIwsMZfQO/8dooXYdiYUdzV9roXG6OGVQ
SsV/xR2lE3XrR71TBegfRnQirI8tj4psSor+yCj3qV936Oh31D96Z6P4glshibsG
ffWAO/TSdu5ZV+UVahh6bTozs+g+odUu/S48TeI1fk7lPlqwZdjoSHXUI2v1FAQ2
jSSywuZQxHlGhg6OeI052cxx3zcVyVVLFHhIrfvufNc3c3+KYhtyiSzBNYN1BrJi
xNXwlDS1jYWgRHkf9zbNBU0MLTYHjZZvO9Jpl/UhKKBdIvJFwmGmXS2lgU6slunJ
Ojp4tY1tbI520KOskV/OoqEfmhXh5fTlI3onzoK1aLqxk1d0d65ONcxqVbAG79RN
b0Q5PgewSOgFlcZ7tEIZKAWsWVhjlFTSGRujdZVM1vZB9fCJesemai7HU0e4J+Do
tqvss8I2n6TPxlTYFzQ4w12pIiOzx/8cFLX78NLN8wQFElhhczeuW5HDAnmPxYhQ
eLY0HgDCFSvVAvGXo0j1gcBUcOr/LzZSsJhxsB7FKyrUjlmD/7Y45WoKJj41bKL+
y4+iDhXyLBiqVClRijsguwiCkmPFiR7Bng2pglS0oIWPWu1UbTJWVJPfuUTOBC+M
4/2fBtgFjUz8iUISs9ncEKkERlxodBIu+ekgLJZAigSMvUKfGE1YB1AA9x96VLjd
VJSjjWvnhMEoSwNzlNQ9+dhoD5Cg9zicgIIKnHnovYGOu8g9ZWfvhJFrKZgkfLRv
r2KgkWiHWpf0swiyGUOlGJDe39nMMkoxib7XE/J3VI3na1ZUOIf8kl9kdHXJ0R3C
2IjdbfiFHEDOrakp5oeVf8BbLK7RB8OlxgJAS47Byh8j97U7f13A5ZYlK3bkZ7E4
h7mCJQozgWP81ut0d9WUlcKp5M8yg2ctZ7h4oeG4Js4ceHqd19Z4P+1xWKwXcdmV
+uhiTftevTu3/UhYQVV4ck98C9pursJJYL5hTnIIpTSWIR+jSahhtzUy/upjugPp
cKi6eGlOkcHdKNRtiu7/IZqni85fC8PAwPZ93SICdiq6BpGaGWFh046weIJuflSK
Pd76+M70YRd+pkaRjJyFJ3hLyg7W5mlOb1+yBIlXKzpbch9B5E4dRHCcOsg4+v/9
exRgAnvUIhR/GpSySDDwgKHg8rAyjjoGeZFH3TJIemAAimyaR608a9tCn7SxVobs
UQlZ9WwC0dQIEv7mSvSige3imbybPtCoBHJAqsJqKCFJEDWbIF5l2VYZcfJUYaEI
oZAJHYGnZm33yQ6eSOusXJ2SnnGZ+ZsGO4bDVSwN20FkSt11gN8Wjrki9CxeVQp7
dWbKX1r/lZw74yUB4cYN23hgLJsdqvM7THzwlBkVtgV74RGY0qv59ecBUSQedlSK
dkOnkmoCiGRSNyf+ebijQaygnfK0ArG5wiRF/RQWiPFj7S6DHRxIOrXqcmvhJ7Ly
NApn9pPYyoZEAbk82MAXkapZ5+YLIKLjdNsYuKq5xVty+mc+FfxLWmZGX+QQinra
Z9DfY9KQw4rxJ/ju4ILnDrygm/QBsNFXBojOuzOIULt7c26s3d/47T+IXA4SIX4v
cPqYa6S3PU/Yoe5/Ya3tFxXmBXgEgVLZuujMs7dyCOAqLEyBEHYqIclp+TElWQLR
V660fczVXeedfd2tNBy1IBj1vhGa9j5mZLbFwTczykwCFfihLIrxSEc1MQA4CaSX
-----END RSA PRIVATE KEY-----

The public and private keys are used to secure all SSH connections and traffic to your server. Keep these keys private.

fyi: Putty can create SSH Keys too

If you do not have a Linux computer or Linux server to generate keys the Putty generator can create keys too.

Puttygen generating a key based on the randomness of mouse movements.

I did not know Putty can create keys.

Do save the public and private key(s) that were generated in Puttygen (tip: PPK files are what we are after along with the public key later in this post).

Public keys are added to your server when you deploy it. On Linux, you can add new public keys after deployment by adding them to the file “~/.ssh/authorized_keys” to allow people to log in.
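A minimal sketch of adding a public key on an existing Ubuntu/Debian server (assuming the key pair generated earlier, named server/server.pub, and that you are logged in as the target user):

mkdir -p ~/.ssh && chmod 700 ~/.ssh
cat server.pub >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys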

Puttygen does format the keys differently than how Ubuntu generates them. Read more here. I’ll keep generating keys in Linux over Puttygen.

Output of the public and PPK files from Puttygen

Putty SSH Client on Windows

Putty is a free Windows program that you can use to connect to servers via SSH. Download and install the Putty program.

Open Putty

Putty Icon

Default Putty User Interface.

Screenshot of the Putty Program

To create a connection, add an existing IP address (or server name) and SSH port (22) to Putty.

Screenshot of an IP and port entered into putty

In Putty (note the tree view to the left of the image), you can set the auto-login username used to log into the remote server under Connection then Data in the tree view.

Screenshot showing the SSH username being added to Putty under the Connection then Data menu

You can also set the username under the Connection then Rlogin section of Putty.

Set the username under the Rlogin area of Putty

OK, let’s add the private SSH key to Putty.

Putty screenshot showing no support for standard SSH keys (only PPK files)

It looks like Putty only supports PPK private key files not ones generated by Linux. I used to be able to use the private key in the VSSH program on OSX and add the private key to connect to the server over SSH. Putty does not allow you to use Linux generated Private keys directly.

Convert your (Linux generated) private key to (Putty) PPK format with Puttygen

Putty comes with a Key Generator/Converter, you can open your existing RSA private key and convert it (or generate a new one).

TIP: If you generate a key in Puttygen, don’t forget to add it to the ~/.ssh/authorized_keys file on your remote server.

Open Puttygen

Puttygen icon

Click Conversions then Import Key and choose the private key you generated in Linux.

Screenshot showing import RSA key to convert

The private key will be opened

Screenshot of imported RSA key

You can then save the private key as a PPK file.

Save the private key as a PPK file
“server.ppk” Key contents (sample key)
PuTTY-User-Key-File-2: ssh-rsa
Encryption: aes256-cbc
Comment: imported-openssh-key
Public-Lines: 12
AAAAB3NzaC1yc2EAAAADAQABAAACAQC7Xo9bOCXJ7gVjP8tKOxHVId3KTo5I0VRU
/kSRK3+mGd5VbDbQABo3tdWzYhzkjODzRS9TeL2dcLAQNNQKshi9IW5IGDS1NocS
CLFQId5BFr9s3E79fkWqcZkKmwocepXOOZ91EDKgIFxviOzZKe99sdxxMoZzi1nx
gVyXl4TnaelyiQxeKYniVs1iqDfYWQCxkKsmYit8TvGtOwrhLvKNh9362/y5ebpX
VdFlRuB83eF7k8RHNYCQyOJJVx4cwnTIsAN0GMOwjuaOZbp7rR1d6k7RZmaApNRT
baWOXy32UiBST5TV/jXF2UL/4IBnn+yvCrM0v79e/3omgjlVVKfWByFzMv/YlBKC
AX3xxtJQ9RkzTqseKupXmmJU0rik6Xuz31N2oyw4M7yJofSUGVCN0pnpKEvnKxqo
lfD9egdQy2XDaNioY7cvOO1qRegCKE0sDh1m5MzJWMhbDs7macSMyd6+0O5qWc/B
yHy0G/mVbd8kO4jIuEzEs4IFkPCToEZp7KfkY7KRkOhccLbQ4ApCesUfBtGGAN1f
33NnXCHae3Cx46nSd23fvgDZUVnjI47tNJH5Z8FNVlW/fp5Rgeu/aPUephnDX2IB
xwIKQOmSTDY+nxU4V+c93H1gSOJfvqYbVKIAXKyN9Yh6LC44ZvLrL4q0TC0QlH2+
kxMLnuj7zw==
Private-Lines: 28
DkpbM78GgGBSgfs9MsmZwDJj6HFXdoe+fCP1rLnwbE99mvU6Fbs23hXd+FsVdQbb
VR5tKTocV7tEwGjtLCHSTSF6gap0l4ww0Ecuvr/Dra2CJ2BsntyssBrWnlUT7OlA
M9zKQAzywAy4AHkph0YvH4l7BcJ5V1pUltm2JDTU6+iFqXDsstUUEDcQ4u0EalWU
EEsW+quNSwO0HBHvWY6N7tbiuEN9L+cFYIdsJEDfqM4hNi+7Ym+SQq5FOPyA6gXa
vhujsjPQAWI3TFxh7EIvsPDMCXxWHL6qaDvOMmPPTZDbEvm4nQ5Kax9jWacPILn7
ezc7ZAiZdDiFbkF3TLyuHx71mjChZgLoZLWYfXR3MBEEYnkNO/7oSMRUwDzEyWKW
ZgqdUtGg0cR+qWvaxQTDQsN/DjB7jGgnlreF92S8xSsbk5GgpZnTQ1V0cm3oecB+
+JP90K4Fi979gPWnwTfg6ZvmLUiVz3uBbvegkT9CVZhhZXSKq53H+SjTZfKBPrM9
NHGLkYr1WjToGR39LMrh4X3KChGewMFyuxtpkEQV60eCnHZBHgTco2A0yriRprOP
Ks4qJXOtZnsMYMesUDX9W5wLc4HcRvRh2UBPw/8bPz6mNrBk8j5SPIwBrPMIBejd
4IPoYezaEFKPg2bP7dn+Nftz5CGagcV2g+zhE615dsWzX1P7yu/1dTmz9LXaMmN6
d+zJE8TtjeaoW5NE1H3Flj9rknzJW7xQfokhS5hMkOg6J0AA6Pk13tupu8WHMkVB
x7nVu876f8tIbT8GzXCGgSl+zS7IJO3pt9T9QHIYa+T3oTIUqfBfK1WffUZwHMRn
Xn/VKUtIIIPiVfCtQuxSrTiJzQcoJ/yvfv62YAGv2LsDlBoHfXRdf6h3TCCCOVxT
WE45sbj3gJ1Cgjt1SEd/8A3hkstn2U2NKBI9gkB9H5BbDJoAXq6/4CkwaQvSEzs7
LK5btRlWop+7gqkyMPpgxv9li9IEDJ999ufMqxkFgOBkmkR5Si71elXRnwiKrjfU
Ce14iy7Dd7lb7IU9OEBjWFZlSigVEnc8klhGHDuxnojiW1ld7pUDIkAAbdTMOFON
abcpfNwcg5Y3l+1KwIQHuewAUuA9472jV4V9EAn7pJ7wgmYHbzMzg9Z9dM8h/3UI
axBzAW+cJM80gN+nZMbmDC9FkXV16GSuqC2iQUVGb2TIheAS7oCR+JFZFQNv0ytF
rGQ9K1wIGbMI4oDPcAid7DzrEXVl3d2x8MtwF/WzfHehVJD1h1uNwezLf1gBKyas
9GBfDOYwd8zgaL2H99GYD1Ba7TePJY81mx7m10eYdwDj1vCpboKE3cE6AyL7ki+4
Ix7GSzQs9NBckF9+8eVXe2T4Cc450hIoN0BWcxVUUdGCA1skZ1PczPs1z/ae4lxd
l5WmPy8Gyh7cnZpyqzvwAPSFDadkNP60eekfkRHyo4QyLhj7QZtO0kOgWhT3CHma
FjZ5jJu59U/4gc0TpQ8ra3vgKQKudloExsg027+34nR98dN+zzUj4S2C/J34W98C
DEEu/SO7nfW/a2UARXBKWCbS+3j24zHc9dbgX2tZoAoInUvRGiSOsLVsMhDiBoyb
wWoNxrKPR3Fi5zZ+GfDUgUGpZoW/b54KnFouIHBYbI41Gkh4vj6lxOGh/sb3SPHd
Wg6EN/0z/mer3bG0a2/ZHKYA5KGWRXWYvYLz4Je8fb/egBrSU6BztwSNeilzA9lI
J4BO7pzXECnWYutB14UxHw==
Private-MAC: 12298fa865ac574da81898252e83b812200cba59

Now the PPK key can be added to Putty for any server connection that uses the public key. Use the right key for the right server though.

Add the private key to a Putty server connection by clicking Connection, SSH, Auth and browsing to the PPK file.

Screenshot showing the PPK key file added to Putty

Now we need to save the connection: click back on the Session node at the top of the tree view, type a server name and click Save.

Save Putty connection.

Connecting to your server via SSH with Putty.

Once you have added a server name, port, usernames and private key to Putty you can double click the server list item to connect to your server.

You will see a message asking you to accept the server's host key fingerprint. Click Yes if the fingerprint matches the one your server or provider reports (if it does not match, someone could be performing a man-in-the-middle attack between your local computer and the server).

Putty message box asking to remember the server's host key

Hopefully, you will now have full access to your server with the account you logged in with.

Screenshot of an Ubuntu screen after login

Happy Coding.

Alternatives to self-managed VM’s

I will always run a self-managed server (and configure it myself) as it’s the most economical way to build a fast and secure server, in my humble opinion.

I have blogged about alternatives but these solutions always sacrifice something and costs are usually higher and performance can be slower.

I am also lucky enough that I can do this as a hobby and it’s not my day job. When you self-manage a VM you will have endless tasks securing your server and tweaking it, but it’s fun.

More Reading

Read some useful Linux commands here and read my past guides here. If you want to buy a domain name click here.

If you are bored and want to learn more about SSH Secure shell read this.

Related Blog Posts

  • Monitor server performance with NixStats and receive alerts by SMS, Push, Email, Telegram etc
  • Useful Linux Terminal Commands
  • Setup two factor authenticator protection at login (SSH) on Ubuntu or Debian
  • etc

Version: 1.1 Added MobaXterm link

Filed Under: 2FA, Authorization, AWS, Cloud, Digital Ocean, Linux, Putty, Secure Shell, Security, Server, SSH, Ubuntu, UpCloud, VM, Vultr Tagged With: Connecting, Putty, secure, server, Shell, ssh

No matter what server-provider you are using I strongly recommend you have a hot spare ready on a different provider

August 5, 2018 by Simon


Aside

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free UpCloud VM credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

Buy a domain name here

Domain names for just 88 cents!

Now on with the post.

Murphy’s Law

I recently had an issue where I set up a website for a friend. I invested 6 hours into setting up..

I setup…

  • Debian OS
  • NGINX Web Server
  • MySQL Database
  • PHP 7.2
  • PHP-FPM Child Workers
  • HTTPS Certificate
  • Security (Firewall/Headers/SSH, WordPress, Plugins etc).
  • Installed WordPress and Plugins
  • Setup DNSSEC
  • Etc

I had tested GTmetrix scores (load times of less than 1 second). Security headers were tested and I was happy with the site.

Then, while I was away from my keyboard, the server and backups were automatically deleted after 7 days because I had assumed the account was valid and had credit.

Lesson Learned

  • Always have a backup of the server setup, /www, MySQL etc (see the backup sketch below).
  • Script setups (Ansible, Puppet or shell scripts) to save time redeploying if need be.
  • Backups are not always available.
  • Do have your setup documented (check).
  • Do have a disaster plan.

I have guides on setting up a server on UpCloud, AWS, Vultr and Digital Ocean, but setting up can be rather repetitive, so how can you avoid setting up servers again and again?
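Even a tiny scheduled script that copies a database dump and the web root to a second provider beats nothing; here is a rough sketch (every name, path and host below is an example, not my real setup):

#!/bin/bash
# Rough nightly backup sketch: dump the database, archive the web root, push off-site.
DATE=$(date +%F)
mysqldump -u backupuser -p'********' databasename | gzip > /backup/db-$DATE.sql.gz
tar -czf /backup/www-$DATE.tar.gz /www-root
rsync -az /backup/ backupuser@backup-host-on-another-provider:/backups/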

Why Plan for the Worst

  • Companies disappear.
  • Some hosts go down.
Some hosts have weird trial modes and internal processes that could take your site down.
  • Human error?
  • Murphy’s Law

How I will prevent this in future

  1. I am building a Java desktop app for Windows/OSX/Linux that will deploy and set up servers on UpCloud/Vultr/Digital Ocean and allow for 1-click deploy, backup and restore.
  2. I am going to re-establish replication between servers with RSync etc.
  3. I am going to start to automate installs and environments.
  4. I am going to set up hot (ready to go) Green/Blue mirrored environments (www and DB servers) on different providers in case of emergency. Then I can set the active live server with DNS (blog posts soon).
  5. Consider a server farm (same provider or different providers)

I hope this guide helps someone.

Please consider using my referral code and get $25 UpCloud VM credit if you need to create a server online.

https://www.upcloud.com/register/?promo=D84793


Revision History

v1.0 Initial Post

Filed Under: Backup, Disaster Recovery, Restore, Security, Server, VM Tagged With: a, are, different, have, hot, I, matter, No, on a, provider, ready, recommend, server-provider, spare, strongly, Using, what, you

Setup a dedicated Debian subdomain (VM), Install MySQL 14 and connect to it from a WordPress on a different VM

July 21, 2018 by Simon

This is how I set up a dedicated Debian subdomain (VM), Installed MySQL 14 and connected to it from a WordPress installation on a different VM

Aside

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free UpCloud VM credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving fearby.com from Vultr to UpCloud.

Buy a domain name here

Domain names for just 88 cents!

Now on with the post.

Fearby.com

I will be honest, fearby.com is my play server where I can code, learn about InfoSec and share (It’s also my stroke rehab blog).

There is no faster way to learn than actually doing. The problem is my “doing” usually breaks the live site from time to time (sorry).

I really need to set up a testing environment (DEV-TEST-LIVE or GREEN-BLUE) server(s). GREEN-BLUE has advantages as I can always have a hot spare ready. All I need to do is toggle DNS and I can set the GREEN or BLUE server as the live server.

But first I need to separate my database from my current fearby.com server and set up a new web server.

Dedicated Database Server

I read the following ( Should MySQL and Web Server share the same server? ) at Percona Database Performance Blog. Having a separate database server should not negatively impact performance (It may even help improve speeds).

Deploy a Debian VM (not Ubuntu)

I decided to set up a Debian server instead of Ubuntu (mostly because of Debian's good focus on stability and security).

I logged into the UpCloud dashboard on my mobile phone and deployed a Debian server in 5 mins.  I will be using my existing how to setup Ubuntu on UpCloud guide (even though this is Debian).

TIP: Sign up to UpCloud using this link to get $25 free UpCloud VM credit.

Deploy Debian Server

Deploy a Debian server setup steps:

  1. Login to UpCloud and go to Create server.
  2. Name your Server (use a fully qualified domain name)
  3. Add a description.
  4. Choose your data centre (Chicago for me)
  5. Choose the server specs (1x CPU, 50GB Disk, 2GB Memory, 2TB Traffic for me)
  6. Name the Primary disk
  7. Choose an operating system (Debian for me)
  8. Select an SSH Key
  9. Choose misc settings
  10. Click Deploy server

After 5 mins your server should be deployed.

After Deploy

Setup DNS

Log in to your DNS provider and create DNS records pointing to the new IPs (IPv4 and IPv6) provided by UpCloud. It took 12 hours for DNS to replicate to me in Australia.

Add DNS records with your domain registrar: an A record = IPv4 and an AAAA record = IPv6.

Setup a Firewall (at UpCloud)

I would recommend you set up a firewall at UpCloud as soon as possible (don’t forget to add the recommended UpCloud DNS IPs and any whitelisted IPs to your firewall).

Block everything and only allow

  • Port 22: Allow known IP(s) of your ISP or VPN.
  • Port 53: Allow known UpCloud DNS servers
  • Port 80 (ALL)
  • Port 443 (ALL)
  • Port 3306 Allow your WordPress site and known IP(s) of your ISP or VPN.

Read my post on setting up a whitelisted IP on an UpCloud VM… as it is a good idea.

UpCloud thankfully has a copy firewall feature that is very handy.

Copy Firewall rules option at UpCloud

After I set up the firewall I SSH’ed into my server (I use vSSH on OSX but you could use Putty).

I updated the Debian system with the following  command

sudo apt update

Get the MySQL Package

Visit http://repo.mysql.com/ and get the URL of the latest apt-config repo deb file (e.g “mysql-apt-config_0.8.9-1_all.deb”). Make a temp folder.

mkdir /temp
cd /temp

Download the MySQL deb Package

wget http://repo.mysql.com/mysql-apt-config_0.8.9-1_all.deb

Install the package

sudo dpkg -i mysql-apt-config_0.8.9-1_all.deb

Update the system again

sudo apt update

Install MySQL on Debian

sudo apt install mysql-server
Reading package lists... Done
Building dependency tree
Reading state information... Done
The following additional packages will be installed:
libaio1 libatomic1 libmecab2 mysql-client mysql-common mysql-community-client mysql-community-server psmisc
The following NEW packages will be installed:
libaio1 libatomic1 libmecab2 mysql-client mysql-common mysql-community-client mysql-community-server mysql-server psmisc
0 upgraded, 9 newly installed, 0 to remove and 1 not upgraded.
Need to get 37.1 MB of archives.
After this operation, 256 MB of additional disk space will be used.
Do you want to continue? [Y/n] y
Get:1 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-community-client amd64 5.7.22-1debian9 [8886 kB]
Get:2 http://deb.debian.org/debian stretch/main amd64 mysql-common all 5.8+1.0.2 [5608 B]
Get:3 http://deb.debian.org/debian stretch/main amd64 libaio1 amd64 0.3.110-3 [9412 B]
Get:4 http://deb.debian.org/debian stretch/main amd64 libatomic1 amd64 6.3.0-18+deb9u1 [8966 B]
Get:5 http://deb.debian.org/debian stretch/main amd64 psmisc amd64 22.21-2.1+b2 [123 kB]
Get:6 http://deb.debian.org/debian stretch/main amd64 libmecab2 amd64 0.996-3.1 [256 kB]
Get:7 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-client amd64 5.7.22-1debian9 [12.4 kB]
Get:8 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-community-server amd64 5.7.22-1debian9 [27.8 MB]
Get:9 http://repo.mysql.com/apt/debian stretch/mysql-5.7 amd64 mysql-server amd64 5.7.22-1debian9 [12.4 kB]
Fetched 37.1 MB in 12s (3023 kB/s)
Preconfiguring packages ...
Selecting previously unselected package mysql-common.
(Reading database ... 34750 files and directories currently installed.)
Preparing to unpack .../0-mysql-common_5.8+1.0.2_all.deb ...
Unpacking mysql-common (5.8+1.0.2) ...
Selecting previously unselected package libaio1:amd64.
Preparing to unpack .../1-libaio1_0.3.110-3_amd64.deb ...
Unpacking libaio1:amd64 (0.3.110-3) ...
Selecting previously unselected package libatomic1:amd64.
Preparing to unpack .../2-libatomic1_6.3.0-18+deb9u1_amd64.deb ...
Unpacking libatomic1:amd64 (6.3.0-18+deb9u1) ...
Selecting previously unselected package mysql-community-client.
Preparing to unpack .../3-mysql-community-client_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-community-client (5.7.22-1debian9) ...
Selecting previously unselected package mysql-client.
Preparing to unpack .../4-mysql-client_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-client (5.7.22-1debian9) ...
Selecting previously unselected package psmisc.
Preparing to unpack .../5-psmisc_22.21-2.1+b2_amd64.deb ...
Unpacking psmisc (22.21-2.1+b2) ...
Selecting previously unselected package libmecab2:amd64.
Preparing to unpack .../6-libmecab2_0.996-3.1_amd64.deb ...
Unpacking libmecab2:amd64 (0.996-3.1) ...
Selecting previously unselected package mysql-community-server.
Preparing to unpack .../7-mysql-community-server_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-community-server (5.7.22-1debian9) ...
Selecting previously unselected package mysql-server.
Preparing to unpack .../8-mysql-server_5.7.22-1debian9_amd64.deb ...
Unpacking mysql-server (5.7.22-1debian9) ...
Setting up libatomic1:amd64 (6.3.0-18+deb9u1) ...
Setting up psmisc (22.21-2.1+b2) ...
Setting up mysql-common (5.8+1.0.2) ...
update-alternatives: using /etc/mysql/my.cnf.fallback to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Setting up libmecab2:amd64 (0.996-3.1) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Setting up libaio1:amd64 (0.3.110-3) ...
Processing triggers for systemd (232-25+deb9u4) ...
Processing triggers for man-db (2.7.6.1-2) ...
Setting up mysql-community-client (5.7.22-1debian9) ...
Setting up mysql-client (5.7.22-1debian9) ...
Setting up mysql-community-server (5.7.22-1debian9) ...
update-alternatives: using /etc/mysql/mysql.cnf to provide /etc/mysql/my.cnf (my.cnf) in auto mode
Created symlink /etc/systemd/system/multi-user.target.wants/mysql.service -> /lib/systemd/system/mysql.service.
Setting up mysql-server (5.7.22-1debian9) ...
Processing triggers for libc-bin (2.24-11+deb9u3) ...
Processing triggers for systemd (232-25+deb9u4) ...

Secure MySQL

You can secure the MySQL server deployment (set options as needed)

sudo mysql_secure_installation

Enter password for user root:
********************************************
VALIDATE PASSWORD PLUGIN can be used to test passwords
and improve security. It checks the strength of password
and allows the users to set only those passwords which are
secure enough. Would you like to setup VALIDATE PASSWORD plugin?

Press y|Y for Yes, any other key for No: No
Using existing password for root.
Change the password for root ? ((Press y|Y for Yes, any other key for No) : No

... skipping.
By default, a MySQL installation has an anonymous user,
allowing anyone to log into MySQL without having to have
a user account created for them. This is intended only for
testing, and to make the installation go a bit smoother.
You should remove them before moving into a production
environment.

Remove anonymous users? (Press y|Y for Yes, any other key for No) : Yes
Success.

Normally, root should only be allowed to connect from
'localhost'. This ensures that someone cannot guess at
the root password from the network.

Disallow root login remotely? (Press y|Y for Yes, any other key for No) : No

... skipping.
By default, MySQL comes with a database named 'test' that
anyone can access. This is also intended only for testing,
and should be removed before moving into a production
environment.

Remove test database and access to it? (Press y|Y for Yes, any other key for No) : Yes
- Dropping test database...
Success.

- Removing privileges on test database...
Success.

Reloading the privilege tables will ensure that all changes
made so far will take effect immediately.

Reload privilege tables now? (Press y|Y for Yes, any other key for No) : Yes
Success.

All done!

Install NGINX

I installed NGINX to allow Adminer MySQL GUI to be used

I ran these commands to install NGINX.

sudo apt update
sudo apt upgrade
sudo apt-get install nginx

I edited my NGINX configuration as required.

  • Set a web server root
  • Set desired headers
  • Optimized NGINX (see past guides here, here and here)

I reloaded NGINX

sudo nginx -t
sudo nginx -s reload
sudo systemctl restart nginx

Install PHP

I followed this guide to install PHP on Debian.

sudo apt update
sudo apt upgrade

sudo apt install ca-certificates apt-transport-https
wget -q https://packages.sury.org/php/apt.gpg -O- | sudo apt-key add -
echo "deb https://packages.sury.org/php/ stretch main" | sudo tee /etc/apt/sources.list.d/php.list

sudo apt update
sudo apt install php7.2
sudo apt install php-pear php7.2-curl php7.2-dev php7.2-mbstring php7.2-zip php7.2-mysql php7.2-xml php7.2-cli php7.2-common

Install PHP FPM

apt-get install php7.2-fpm

Increase Upload Limits

You may need to temporarily increase upload limits in NGINX and PHP before you can restore a WordPress database. My fearby.com blog's database is about 87MB.

Add “client_max_body_size 100M;” to “/etc/nginx/nginx.conf”

Add the following to “/etc/php/7.2/fpm/php.ini”

  • post_max_size = 100M
  • upload_max_filesize = 100M
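Then reload the services and confirm the new limits took effect (a quick sanity check, assuming the PHP 7.2 FPM install from above):

sudo nginx -t && sudo systemctl reload nginx
sudo systemctl restart php7.2-fpm
grep -E "post_max_size|upload_max_filesize" /etc/php/7.2/fpm/php.ini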

Restore a backup of my MySQL database in MySQL

You can now use Adminer to restore your blog to MySQL. Read my post on Adminer here. I used Adminer to move my WordPress site from CPanel to a self-managed server a year ago.

First login to your source server and export your desired database then login to the target server and import the database.

Firewall Check

Don’t forget to allow your WordPress site’s 2x Public IP’s and 1x Private IP to access port 3306 in your UpCloud Firewall.

How to check open ports on your current server

sudo netstat -plunt

Set MySQL Permissions

Open MySQL

mysql --host=localhost --user=root --password=***************************************************************************

I ran these statements to grant the user logging in from the nominated IPs access to MySQL.

mysql>
GRANT ALL ON databasename.* TO 'username'@'IPv4Server1PublicAddress' IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO 'username'@'IPv4Server1PrivateAddress' IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO 'username'@'IPv4Server2PublicAddress' IDENTIFIED BY '***********sql*user*password*************';
GRANT ALL ON databasename.* TO 'username'@'IPv4Server2PrivateAddress' IDENTIFIED BY '***********sql*user*password*************';

Reload permissions in MySQL

FLUSH PRIVILEGES;
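You can confirm the accounts were created with a quick query (placeholders as above):

mysql --user=root --password --execute="SELECT User, Host FROM mysql.user;"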

Allow access to the Debian machine from known IP’s

Edit “/etc/hosts.allow”

Additions (known safe IP’s that need access to this MySQL remotely).

mysqld : IPv4Server1PublicAddress : allow
mysqld : IPv4Server1PrivateAddress : allow
mysqld : IPv4Server2PublicAddress : allow
mysqld : IPv4Server2PrivateAddress : allow

mysqld : ALL : deny

Tell MySQL which address to listen on

Edit “/etc/mysql/my.cnf”

Added..

[mysqld]
user = mysql
pid-file = /var/run/mysqld/mysqld.pid
socket = /var/run/mysqld/mysqld.sock
port = 3306
basedir = /usr
datadir = /var/lib/mysql
tmpdir = /tmp
language = /usr/share/mysql/English
bind-address = DebianServersIntenalIPv4Address

I guess you could change the port to something random???

Restart MySQL

sudo service mysql restart
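From the WordPress server you can then test the remote connection before touching wp-config.php (10.x.x.x being the Debian server's IP, and username the account granted above):

mysql --host=10.x.x.x --user=username --password --execute="SHOW DATABASES;"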

Install a second local firewall on Debian

Install ufw

sudo apt-get install ufw

Do add the IP of your desired server or VPN to access SSH

sudo ufw allow from 123.123.123.123 to any port 22

Do add the IP of your desired server or VPN to access WWW

sudo ufw allow from 123.123.123.123 to any port 80

Now add the known IPs (e.g. any web server's public (IPv4/IPv6) or private IPs) that you wish to grant access to MySQL (e.g. the source website that used to host MySQL).

sudo ufw allow from 123.123.123.123 to any port 3306

Do add UpCloud DNS Servers to your firewall

sudo ufw allow from 94.237.127.9 to any port 53
sudo ufw allow from 94.237.40.9 to any port 53
sudo ufw allow from 2a04:3544:53::1 to any port 53
sudo ufw allow from 2a04:3540:53::1 to any port 53

Add all other rules as needed (if you stuff up and lock yourself out you can log in to the server with the Console on UpCloud).

Restart the ufw firewall

sudo ufw disable
sudo ufw enable

Prevent MySQL starting on the source server

Now we can shut down MySQL on the source server (leave it there just in case).

Edit “/etc/init/mysql.conf”

Comment out the line that contains “start on ” and save the file

and run

sudo systemctl disable mysql

Reboot

shutdown -r now

Stop and Disable NGINX on the new DB server

We don’t need NGINX running now the database has been imported with Adminer.

Stop and prevent NGINX from starting up on startup.

/etc/init.d/nginx stop
sudo update-rc.d -f nginx disable
sudo systemctl disable nginx

Check to see if MySQL is Disabled

service mysql status
* mysql.service - MySQL Community Server
Loaded: loaded (/lib/systemd/system/mysql.service; disabled; vendor preset: enabled)
Active: inactive (dead)

Yep

Test access to the database server in PHP code

Add to dbtest.php

<em>SELECT guid FROM wp_posts</em><br />
<ul><?php

//External IP (charged after quota hit)
//$servername = 'db.yourserver.com';

//Private IP (free)
$servername = '10.x.x.x';

$username = 'username';
$password = '********your*password*********';
$dbname = 'database';

// Create connection
$conn = new mysqli($servername, $username, $password, $dbname);
// Check connection
if ($conn->connect_error) {
    die("Connection failed: " . $conn->connect_error);
}

$sql = 'SELECT guid FROM wp_posts';
$result = $conn->query($sql);

if ($result->num_rows > 0) {
    // output data of each row
    while($row = $result->fetch_assoc()) {
        echo $row["guid"] . "<br>";
    }
} else {
    echo "0 results";
}
$conn->close();
?></ul>
Done

Check for open ports.

You can install nmap on another server and scan for open ports

Install nmap

sudo apt-get install nmap

Scan a server for open ports with nmap

You should see this on a server that has access to see port 3306 (port 3306 should not be visible to non-whitelisted IPs). Port 3306 should not be visible to everyone.

sudo nmap -PN db.yourserver.com

Starting Nmap 7.40 ( https://nmap.org ) at 2018-07-20 14:15 UTC
Nmap scan report for db.yourserver.com (IPv4IP)
Host is up (0.0000070s latency).
Other addresses for db.yourserver.com (not scanned): IPv6IP
Not shown: 997 closed ports
PORT     STATE SERVICE
3306/tcp open  mysql

You should see something like this on a server that has access to see port 80/443 (a web server)

sudo nmap -PN yourserver.com

Starting Nmap 7.40 ( https://nmap.org ) at 2018-07-20 14:18 UTC
Nmap scan report for db.yourserver.com (IPv4IP)
Host is up (0.0000070s latency).
Other addresses for db.yourserver.com (not scanned): IPv6IP
Not shown: 997 closed ports
PORT     STATE SERVICE
80/tcp   open  http
443/tcp   open  https

I’d recommend you use a service like https://pentest-tools.com/network-vulnerability-scanning/tcp-port-scanner-online-nmap# to check for open ports.  https://hackertarget.com/tcp-port-scan/ is a great tool too.

https://www.infobyip.com/tcpportchecker.php is also a free port checker that you can use to verify individual closed ports.

Screenshot of https://www.infobyip.com/tcpportchecker.php

Hardening MySQL and Debian

Read: https://www.debian.org/doc/manuals/securing-debian-howto/ap-checklist.en.html

Configuring WordPress to use the dedicated Debian VM

On the source server that used to have MySQL edit your wp-config.php file for WordPress.

Remove

define('DB_HOST', 'localhost');

Add the following (read the update below; I changed the DNS-based IP to the private IP to get free traffic)

//Original localhost
//define('DB_HOST', 'localhost');

//New external host via DNS (Charged after quota hit)
//define('DB_HOST', 'db.fearby.com');

//New external host via Private IP (Free)
define('DB_HOST','10.x.x.x');

Restart NGINX

sudo nginx -t
sudo nginx -s reload
sudo systemctl restart nginx

Restart PHP-FPM

service php7.2-fpm restart

Conclusion

Nice, I seem to have shaved off 0.3 seconds in load times (25% improvement)

1 second GTmetrix load time

Update: Using a Private IP or Public IP between WordPress and MySQL servers

After I released this blog post (version 1.0 with no help from UpCloud) UpCloud contacted me and said the following.

Hello Simon,

I notice there's no mention of using the private network IPs. Did you know that we automagically assign you one when you deploy with our templates. The private network works out of the box without additional configuration, you can use that communicate between your own cloud servers and even across datacentres.

There's no bandwidth charge when communicating over private network, they do not go through public internet as well. With this, you can easily build high redundant setups.

Let me know if you have any other questions.

--
Kelvin from UpCloud

I have updated my references in this post and replaced the public IP address (that is linked to the DNS record for db.fearby.com) with the private IP address (e.g. 10.x.x.x); your server's private IP address is listed against the public IPv4 and IPv6 addresses.

I checked that the local ufw firewall did indeed allow the private IP access to MySQL.

sudo ufw status numbered |grep 10.x.x.x
[27] 3306                       ALLOW IN    10.x.x.x

On my new Debian MySQL server, I edited the file /etc/mysql/my.cnf and changed the IP to the private IP and not the public IP.

Now it looked like

[mysqld]
user            = mysql
pid-file        = /var/run/mysqld/mysqld.pid
socket          = /var/run/mysqld/mysqld.sock
port            = 3306
basedir         = /usr
datadir         = /var/lib/mysql
tmpdir          = /tmp
language        = /usr/share/mysql/English
bind-address    = 10.x.x.x

(10.x.x.x is my Debian server's private IP)

On my WordPress instance, I edited the file  /www-root/wp-config.php

I added the new private host

//Original localhost
//define('DB_HOST', 'localhost');

//New external host via DNS (Charged after quota hit)
//define('DB_HOST', 'db.fearby.com');

//New external host via Private IP (Free)
define('DB_HOST','10.x.x.x');

(10.x.x.x is my Debian server's private IP)

Also, on the Debian/MySQL server, ensure you have granted access to the private IP of the WordPress server.

Edit /etc/hosts.allow

Add

mysqld : 10.x.x.x : allow

Restart MySQL

sudo systemctl restart mysql
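
To confirm MySQL is now listening on the private interface (and not the public one), check the listening sockets; you should see 10.x.x.x:3306 in the output:

sudo ss -tlnp | grep 3306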

TIP: Enable UpCloud Backups

Do set up automatic backups (and/or take manual backups). Backups are an extra charge but are essential IMHO.

UpCloud backups

Troubleshooting

If you can't access MySQL from WordPress, log back into MySQL on the Debian server

mysql --host=localhost --user=root --password=***************************************************************************

and run

GRANT ALL PRIVILEGES ON *.* TO 'yourmysqluser'@'%' IDENTIFIED BY '***********sql*user*password*************'; FLUSH PRIVILEGES;
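
While you are at the MySQL prompt, a quick look at which users and hosts are allowed to connect can save some guesswork:

SELECT User, Host FROM mysql.user;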

Reboot

Lower Upload Limits

Don't forget to lower the file upload size limits in NGINX and PHP (e.g. back to 2M) now that the database has been restored.

I hope this guide helps someone.

TIP: Sign up to UpCloud using this link to get $25 free UpCloud VM credit.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.6 Changed public IP use to private IP use to ensure we are not charged when the server usage goes over the quota

v1.5 Fixed 03 typo (should have been 0.3)

v1.4 added disable nginx info

v1.3 added https://www.infobyip.com/tcpportchecker.php

v1.1 added https://hackertarget.com/tcp-port-scan/

v1.0 Initial Post

Filed Under: Debian, MySQL, VM, Wordpress Tagged With: 14, a, and, Connect, debian, dedicated, different, from, install, MySQL, Setup, Subdomain, to, vm, wordpress

Set up a whitelisted IP on an UpCloud VM and WordPress using a VPN to get a static IP address

July 5, 2018 by Simon

This is how I set up a whitelisted IP on an UpCloud VM and WordPress using a VPN to get a static IP address

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free UpCloud VM credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Before you begin

Take a backup of WordPress files + database and take a snapshot of your VM (see my UpCloud VM guide here).

Having a ready backup IS a good idea.

Screenshot of https://my.upcloud.com/server/details/

Why Whitelist

Whitelisting is not bulletproof but it is an important link in the security chain. Security is only as good or bad as the strength of your weakest link.

Using updated software, applying patches, using HTTPS, using a reliable host in a reliable location and using good passwords are just as important as IP filtering. Whitelisting IPs goes a long way towards ensuring least-privilege access on connections.

Remember to scan your site with OWASP Zap, Qualys and Kali Linux too.

What IP’s are you going to Whitelist?

Q1) Does your ISP offer a static IP address (or a dynamic IP)?

My ISP does NOT provide a static IP by default (I could pay $20 a month for one, but that's too expensive).

You can check your public IP by loading http://icanhazip.com/ (this will return your public IPV4 address).

Load https://ipv6.icanhazip.com/ to view your IPV6 IP (if you have one)

Q2) Do you need to whitelist IP addresses while on the go (Mobile)? If so I would recommend you whitelist a VPN’s IP or IP range.

Recently the Apache web server was auto-installed and knocked out my NGINX web server, and I needed to log in from a mobile device to investigate. Luckily I had whitelisted my VPN's IP, so I logged in from my mobile device and resolved the issue.

Use a VPN to get a static IP

If you don't have a static IP, or you want to connect to your site on the go (mobile), you can set up a VPN and use its static IP.

I was using http://cyberghostvpn.com/ to get a static IP, but a server failure in Sydney caused my defined whitelisted IP to disappear, so I changed to https://protonvpn.com/ (as Cyberghost were unable to provide known IPs of VPN servers).

TIP: Don’t just whitelist one server, whitelist a few as you never know when a server will go down.

Here is a screenshot of the first VPN I tried (Cyberghost); here Cyberghost VPN is connected to a specified server (Dallas).

Cyberghost VPN screenshot connected to Dallas

I switched to ProtonVPN.

Here is a screenshot of ProtonVPN connected to a Switzerland server. Read more about Proton VPN here.

Screenshot of ProtonVPN

I set Proton VPN to auto-start and connect to my desired server

Screenshot of ProtonVPN startup settings

Proton VPN offered me a 7-day PLUS trial (All Countries, 5 devices, highest speed, secure core etc) after I started using the free version (3 countries, 1 device, low speed). I assume everyone gets the same PLUS trial offer.

You can view Proton plans and pricing here.

Ok, now that we know how to get a static IP, let’s configure some firewalls.

Network Firewall at UpCloud

I use the awesome UpCloud to hold my domains (read more about UpCloud performance here). You can log in to your UpCloud Dashboard and load the server list, click your server and then click Firewall and define firewalls.

Firewall: Open IPv4/IPv6 ports for:

  1. ICMP
  2. 53 (DNS)
  3. 80 (HTTP)
  4. 443 (HTTPS)

Only allow access to port 22 from whitelisted IPs (or IP ranges).

Screenshot of UpCloud firewall screen at https://my.upcloud.com/server/details/

I like to set separate firewall rules for IPV4 and IPV6, for TCP or UDP and I limit rules to certain IP range and port.

Ubuntu Firewall

I also like to run a ufw firewall (more information on ufw) on my Ubuntu server (read this guide on securing Ubuntu in the cloud and running a Lynis audit).

Manually setup firewall rules in ufw.

sudo ufw allow from 1.2.3.4 to any port 22
sudo ufw allow from 1.2.3.5 to any port 22
sudo ufw allow from 1.2.3.6 to any port 22
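
If your VPN provider publishes an IP range rather than individual addresses, ufw also accepts CIDR notation (1.2.3.0/24 below is just an example range):

sudo ufw allow from 1.2.3.0/24 to any port 22 proto tcp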

Don't forget to restart your firewall

sudo ufw disable
sudo ufw enable

Run a  local nmap scan to find open ports

nmap -v -sT localhost

Starting Nmap 7.60 ( https://nmap.org ) at 2018-07-04 22:30 AEST
Initiating Connect Scan at 22:30
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 25/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 80/tcp on 127.0.0.1
Discovered open port 443/tcp on 127.0.0.1
Discovered open port 3306/tcp on 127.0.0.1
Completed Connect Scan at 22:30, 0.02s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000086s latency).
Not shown: 995 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
25/tcp   open  smtp
80/tcp   open  http
443/tcp  open  https
3306/tcp open  mysql

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.06 seconds
           Raw packets sent: 0 (0B) | Rcvd: 0 (0B)

Don't be concerned if you see open ports in a local nmap scan (e.g. port 22 or 3306); these are only open locally. We need to scan externally to see if these ports are open to the internet.

Scan your site with an external nmap tool like pentest-tools.com or hackertarget.com.

Screenshot of a public nmap scan

You should not have non-web-based service ports freely open externally (web-based ports, e.g. 80 and 443, are OK).

Port 22 access should be whitelisted to select IP’s only. You should not have any database ports open externally.

Whitelisting WordPress Access

Download WordFence plugin for WordPress from https://www.wordfence.com/

Read more on downloading WordPress plugins from the command line here. Read my past Wordfence post here.

Once Wordfence is installed open the WordFence All Options screen  (/wp-admin/admin.php?page=WordfenceOptions).

Now you can add your static IP (or IP ranges) to the WordFence whitelist.

Picture of WordFence whitelist

Set up auto-blocking for any non-whitelisted IP trying to log in to /wp-login.php

I permanently ban any IP accessing my login page (there are many).

What to do with rejected IP connections?

Wordfence will block connections to WordPress. I'd suggest you set up fail2ban to block other unwanted connections at the network level too.
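
A minimal fail2ban sketch (tune the values to suit) that bans repeated SSH login failures could look like this:

sudo apt-get install fail2ban
sudo nano /etc/fail2ban/jail.local

Add something like:

[sshd]
enabled = true
maxretry = 3
bantime = 3600

Then restart fail2ban and check the jail:

sudo systemctl restart fail2ban
sudo fail2ban-client status sshd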

Conclusion

You should now have a VM that allows port 22 access only from whitelisted IPs and a WordPress site that only allows logins from whitelisted IPs.

Cons

  • If you forget to start your VPN you can’t log in to your VM via port 22 or log in to WordPress (excellent, this is by design).

Pros

  • Secure (need I say more)

I hope this guide helps someone.

Please consider using my UpCloud referral code and get $25 UpCloud VM credit for free when you signup to create a new VM.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.2 Added Proton plans link

v1.1 Added auto block WordFence option

v1.0 Initial Post

Filed Under: Firewall, Ubuntu, UpCloud, VM, Whitelist, Wordpress, WP Security

Adding two sub domains (one pointing to a new UpCloud VM and the other pointing to an NGINX subsite) on Ubuntu 18.04

June 27, 2018 by Simon

Here is how I added two subdomains (one pointing to a new UpCloud VM and the other pointing to an NGINX subsite) on Ubuntu 18.04

If you have not read my previous posts I have now moved my blog to the awesome UpCloud host (signup using this link to get $25 free credit). I compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here). Here is my blog post on moving from Vultr to UpCloud.

UpCloud performance is great.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Goal(s)

Setup 2x subdomains on https://fearby.com

– Sub Domain #1: https://test.fearby.com (pointing to a dedicated UpCloud VM in Singapore for testing).

– Sub Domain #2: https://audit.fearby.com (pointing to a sub-website on the NGINX/VM that runs https://fearby.com )

Let’s set up the first Sub Domain (dedicated VM) and SSL

Backup

Do back up your server first.

VM

I created a second server ($5/month, or roughly $0.007/hour; 1,024MB Memory, 25GB Disk, 1,024GB/month Data Transfer) at UpCloud. If you don't already have an account at UpCloud use this link to sign up and get $25 free credit ( https://www.upcloud.com/register/?promo=D84793 ). Read my blog post on why UpCloud is awesome and how I moved my domain to UpCloud.

Once I spun up a second server I obtained the IPv4 and IPv6 IP addresses of the new “test” VM from the UpCloud dashboard.

IPV4 IP: 94.237.65.54
IPV6 IP: 2a04:3543:1000:2310:24b7:7cff:fe92:468c

DNS

These DNS records were already in place with my DNS provider (Cloudflare).

A fearby.com 209.50.48.88
AAAA fearby.com 2605:7380:1000:1310:24b7:7cff:fe92:0d64

I added these DNS records for the subdomains.

I added a new A NAME record for the new shared NGINX subdomain (for https://audit.fearby.com), this subdomain will be a sub-website that is running off the same server as https://fearby.com

A audit 209.50.48.88
AAAA audit 2605:7380:1000:1310:24b7:7cff:fe92:0d64

I added another set of records for the new dedicated VM  subdomain (for https://test.fearby.com)

A test 94.237.65.54
AAAA test 2a04:3543:1000:2310:24b7:7cff:fe92:468c

I waited for DNS to replicate around the globe by watching https://www.whatsmydns.net/
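
You can also check propagation from the command line with dig (part of the dnsutils package on Ubuntu):

dig +short test.fearby.com A
dig +short test.fearby.com AAAA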

Setup a Firewall

On the new dedicated https://test.fearby.com VM, I installed the ufw firewall.

sudo apt-get install ufw

I configured the firewall to allow the minimum ports (and added a whitelisted IP for port 22 and UpCloud's DNS servers). I will lock this down some more later.
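
The rules in the status output below were created with commands along these lines (x.x.x.x is a placeholder for my whitelisted IP; the 94.237.x.x addresses are UpCloud's resolvers):

sudo ufw allow from x.x.x.x to any port 22 proto tcp
sudo ufw allow 80
sudo ufw allow 443
sudo ufw allow from 94.237.127.9 to any port 53
sudo ufw allow from 94.237.40.9 to any port 53
sudo ufw deny 25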

TIP: If your ISP does not offer a dedicated IP try a VPN. I use https://cyberghostvpn.com on OSX and Android.

Firewall rules.

sudo ufw status numbered

     To                         Action      From
     --                         ------      ----
[ 1] 22                         ALLOW IN    x.x.x.x
[ 2] 80                         ALLOW IN    Anywhere
[ 3] 443                        ALLOW IN    Anywhere
[ 4] 53                         ALLOW IN    94.237.127.9
[ 5] 53                         ALLOW IN    94.237.40.9
[ 6] 25                         DENY IN     Anywhere
[ 7] 80 (v6)                    ALLOW IN    Anywhere (v6)
[ 8] 443 (v6)                   ALLOW IN    Anywhere (v6)
[ 9] 53                         ALLOW IN    2a04:3540:53::1
[10] 53                         ALLOW IN    2a04:3544:53::1
[11] 22                         ALLOW IN    x.x.x.x.x.x.x.x.x
[12] 25 (v6)                    DENY IN     Anywhere (v6)

I enabled the firewall.

sudo ufw enable

Install NGINX (on https://test.fearby.com)

On the new dedicated https://test.fearby.com VM I…

Created a new www root

mkdir /www-root

Set permissions

sudo chown -R www-data:www-data /www-root

Installed NGINX

sudo apt-get update
sudo apt-get install nginx

I created a placeholder webpage

sudo nano /www-root/index.html

Configured the root value in /etc/nginx/sites-available/default

Created a symbolic link of the nginx config

sudo ln -s /etc/nginx/sites-available/default /etc/nginx/sites-enabled/default

Lets Encrypt SSL

I have previously set up Lets Encrypt on Ubuntu 16.04 but not 18.04. Certbot had info on setting up Lets Encrypt for 14.x, 16.x and 17.x but not 18.x.

Full credit for the SSL steps goes to @Linuxize ( tips on setting up Lets Encrypt on Ubuntu 18.04 ). Check out https://linuxize.com/

I installed Lets Encrypt certbot

sudo apt update
sudo apt install certbot

I created a new Diffie–Hellman key

mkdir -p /etc/ssl/certs/
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Map requests to http://test.fearby.com/.well-known/acme-challenge to /var/lib/letsencrypt/.well-known ( Read the linuxize post for detailed steps ).

mkdir -p /var/lib/letsencrypt/.well-known
chgrp www-data /var/lib/letsencrypt
chmod g+s /var/lib/letsencrypt

Create a /etc/nginx/snippets/letsencrypt.conf on http://test.fearby.com and enforce the redirect.

location ^~ /.well-known/acme-challenge/ {
  allow all;
  root /var/lib/letsencrypt/;
  default_type "text/plain";
  try_files $uri =404;
}
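
Once this snippet is included in an active server block and NGINX has been reloaded, you can sanity check the mapping with a throwaway file (my own quick test, not part of the linuxize guide):

sudo mkdir -p /var/lib/letsencrypt/.well-known/acme-challenge
sudo sh -c 'echo ok > /var/lib/letsencrypt/.well-known/acme-challenge/test'
curl -i http://test.fearby.com/.well-known/acme-challenge/test
sudo rm /var/lib/letsencrypt/.well-known/acme-challenge/test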

Create a /etc/nginx/snippets/ssl.conf file on http://test.fearby.com

ssl_dhparam /etc/ssl/certs/dhparam.pem;

ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;

ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;

ssl_stapling on;
ssl_stapling_verify on;
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 30s;

add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
add_header X-Frame-Options SAMEORIGIN;
add_header X-Content-Type-Options nosniff;

Let’s get a certificate

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d test.fearby.com

Certificates have been created 🙂

ls -al /etc/letsencrypt/live/test.fearby.com/
total 12
drwxr-xr-x 2 user user 4096 Jun 26 11:30 .
drwx------ 3 user user 4096 Jun 26 11:30 ..
-rw-r--r-- 1 user user  543 Jun 26 11:30 README
lrwxrwxrwx 1 user user   39 Jun 26 11:30 cert.pem -> ../../archive/test.fearby.com/cert1.pem
lrwxrwxrwx 1 user user   40 Jun 26 11:30 chain.pem -> ../../archive/test.fearby.com/chain1.pem
lrwxrwxrwx 1 user user   44 Jun 26 11:30 fullchain.pem -> ../../archive/test.fearby.com/fullchain1.pem
lrwxrwxrwx 1 user user   42 Jun 26 11:30 privkey.pem -> ../../archive/test.fearby.com/privkey1.pem
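
If you want to double check what was issued and when it expires, openssl can read the certificate directly:

sudo openssl x509 -in /etc/letsencrypt/live/test.fearby.com/cert.pem -noout -subject -dates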

Now let's edit "/etc/nginx/sites-available/default" on the https://test.fearby.com VM and add the cert paths.

server {
        listen 80 default_server;
        listen [::]:80 default_server;

        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }

        ssl_certificate /etc/letsencrypt/live/test.fearby.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/test.fearby.com/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/test.fearby.com/chain.pem;

        include snippets/ssl.conf;

        #ssl_stapling on; # Requires nginx >= 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

        add_header X-Frame-Options DENY;
        add_header X-Content-Type-Options nosniff;
        add_header X-XSS-Protection "1; mode=block";

        root /www-root/;

        include snippets/letsencrypt.conf;

        index index.html;

        server_name test.fearby.com;

        location / {
                try_files $uri $uri/ =404;
        }
}

Reload NGINX

sudo systemctl reload nginx

or

sudo nginx -t
sudo nginx -s reload
sudo systemctl reload nginx

Now let's set up the second subdomain (a subsite off https://fearby.com) and SSL

VM

I already have NGINX on https://fearby.com, so I just need to set up a second site on it.

DNS

We have already set up a DNS record for https://audit.fearby.com (above)

Firewall

Already configured at https://fearby.com

SSL

Because I had an existing Comodo certificate on https://fearby.com, I am going to repeat the steps above to generate a new certificate, but save the NGINX config to /etc/nginx/sites-available/audit.fearby.com (the symbolic link created below activates the second site).

TIP: Follow the Linuxize guide here (for creating the ssl.conf, letsencrypt.conf etc config files). Do a backup and restore if need be.

I created a new Diffie–Hellman key

mkdir -p /etc/ssl/certs/
sudo openssl dhparam -out /etc/ssl/certs/dhparam.pem 2048

Let’s get a certificate

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d audit.fearby.com

Configure NGINX

Map requests to http://audit.fearby.com/.well-known/acme-challenge to /var/lib/letsencrypt/.well-known ( Read the linuxize post for detailed steps ).

mkdir -p /var/lib/letsencrypt/.well-known
chgrp www-data /var/lib/letsencrypt
chmod g+s /var/lib/letsencrypt

I created a new NGINX site ( /etc/nginx/sites-available/audit.fearby.com )

#proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;#
server {
        root /www-audit-root;

        # Listen Ports
        listen 80;
        listen [::]:80;
        listen 443 ssl http2;
        listen [::]:443 ssl http2;

        # Default File
        index index.html index.php index.htm;

        # Server Name
        server_name audit.fearby.com;

        include snippets/letsencrypt.conf;

        location / {
                try_files $uri $uri/ =404;
        }

        ssl_certificate /etc/letsencrypt/live/audit.fearby.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/audit.fearby.com/privkey.pem;
        ssl_trusted_certificate /etc/letsencrypt/live/audit.fearby.com/chain.pem;

        ssl_dhparam /etc/ssl/certs/auditdhparam.pem;

        ssl_session_timeout 1d;
        #ssl_session_cache shared:SSL:50m;
        ssl_session_tickets off;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';

        ssl_prefer_server_ciphers on;

        ssl_stapling on;
        ssl_stapling_verify on;

        #resolver 8.8.8.8 8.8.4.4 valid=300s;
        #resolver_timeout 30s;

        add_header Strict-Transport-Security "max-age=15768000; includeSubdomains; preload";
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-Content-Type-Options nosniff;

        if ($scheme != "https") {
                return 301 https://$host$request_uri;
        }
}

I created a symbolic link of the config file

sudo ln -s /etc/nginx/sites-available/audit.fearby.com /etc/nginx/sites-enabled/audit.fearby.com

Reload NGINX

sudo systemctl reload nginx

or

sudo nginx -t
sudo nginx -s reload
sudo systemctl reload nginx

How to test the certificate renewal

sudo certbot renew --dry-run

Automate the renewal in crontab (every 12 hours)

I set this crontab entry up on https://fearby.com and https://test.fearby.com

crontab -e
0 */12 * * * test -x /usr/bin/certbot -a \! -d /run/systemd/system && perl -e 'sleep int(rand(3600))' && certbot -q renew --renew-hook "systemctl reload nginx"
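
You can also list the certificates certbot is managing (and their expiry dates) at any time:

sudo certbot certificates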

Conclusion

Yes, I have 2 subdomains (1x dedicated VM and the other a sub-website off an existing server) with SSL certificates.

Ping Results

ping -c 4 fearby.com
PING fearby.com (209.50.48.88): 56 data bytes
64 bytes from 209.50.48.88: icmp_seq=0 ttl=44 time=220.000 ms
64 bytes from 209.50.48.88: icmp_seq=1 ttl=44 time=290.602 ms
64 bytes from 209.50.48.88: icmp_seq=2 ttl=44 time=311.938 ms
64 bytes from 209.50.48.88: icmp_seq=3 ttl=44 time=330.841 ms

--- fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 220.000/288.345/330.841/41.948 ms

ping -c 4 test.fearby.com
PING test.fearby.com (94.237.65.54): 56 data bytes
64 bytes from 94.237.65.54: icmp_seq=0 ttl=44 time=333.590 ms
64 bytes from 94.237.65.54: icmp_seq=1 ttl=44 time=252.433 ms
64 bytes from 94.237.65.54: icmp_seq=2 ttl=44 time=271.153 ms
64 bytes from 94.237.65.54: icmp_seq=3 ttl=44 time=292.685 ms

--- test.fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 252.433/287.465/333.590/30.200 ms

ping -c 4 audit.fearby.com
PING audit.fearby.com (209.50.48.88): 56 data bytes
64 bytes from 209.50.48.88: icmp_seq=0 ttl=44 time=281.662 ms
64 bytes from 209.50.48.88: icmp_seq=1 ttl=44 time=307.676 ms
64 bytes from 209.50.48.88: icmp_seq=2 ttl=44 time=227.985 ms
64 bytes from 209.50.48.88: icmp_seq=3 ttl=44 time=215.566 ms

--- audit.fearby.com ping statistics ---
4 packets transmitted, 4 packets received, 0.0% packet loss
round-trip min/avg/max/stddev = 215.566/258.222/307.676/37.845 ms

Webpage Results

Screenshow showing the main site and 2 subdomains in a web browser

Troubleshooting

If you are having trouble generating the initial certificate, check that you have not blocked port 80 and don't have "Strict-Transport-Security" headers enabled.
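
Two quick ways to check both of those from the command line (yoursubdomain.domain.com is a placeholder):

sudo ufw status | grep 80
curl -sI http://yoursubdomain.domain.com/ | grep -i strict-transport-security

The failed certbot run below is what the 404 problem looked like for me.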

sudo certbot certonly --agree-tos --email [email protected] --webroot -w /var/lib/letsencrypt/ -d yoursubdomain.domain.com
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Obtaining a new certificate
Performing the following challenges:
http-01 challenge for yoursubdomain.domain.com
Using the webroot path /var/lib/letsencrypt for all unmatched domains.
Waiting for verification...
Cleaning up challenges
Failed authorization procedure. yoursubdomain.domain.com (http-01): urn:acme:error:unauthorized :: The client lacks sufficient authorization :: Invalid response from http://yoursubdomain.domain.com/.well-known/acme-challenge/_QA3jblEydx5mE8I8OdRsd2EdHIj4R-przLlmg_w6Tc: q%!(EXTRA string=<html>
<head><title>404 Not Found</title></head>
<body bgcolor="white">
<center><h1>404 Not Found</h1></center>
<hr><center>)

IMPORTANT NOTES:
 - The following errors were reported by the server:

   Domain: yoursubdomain.domain.com
   Type:   unauthorized
   Detail: Invalid response from
   http://yoursubdomain.domain.com/.well-known/acme-challenge/_QA3jblEydx5mE8I8OdRsd2EdHIj4R-przLlmg_w6Tc:
   q%!(EXTRA string=<html>
   <head><title>404 Not Found</title></head>
   <body bgcolor="white">
   <center><h1>404 Not Found</h1></center>
   <hr><center>)

   To fix these errors, please make sure that your domain name was
   entered correctly and the DNS A record(s) for that domain
   contain(s) the right IP address.

I re-ran the certbot command but pointed it to the real /www-root (not /var/lib/letsencrypt/).

Create the challenge folders under the real web root and request the certificate again

mkdir /www-root/.well-known/
mkdir /www-root/.well-known/acme-challenge/
sudo certbot certonly --agree-tos --email [email protected] --webroot -w /www-root -d yoursubdomain.domain.com

I hope this guide helps someone.

Please consider using my referral code and get $25 credit for free.

https://www.upcloud.com/register/?promo=D84793

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.1 Troubleshooting

v1.0 Initial Post

Filed Under: Linux, NGINX, ssl, Subdomain, Ubuntu, UpCloud, VM, Website Tagged With: a, Adding, an, and, domains, new, nginx, on, one, other, pointing, sub, subsite, the, to, two, Ubuntu 18.04, UpCloud, vm

How to use the UpCloud API to manage your UpCloud servers

June 17, 2018 by Simon

How to use the UpCloud API to manage your UpCloud servers.

If you have not read my previous posts I have now moved my blog etc to the awesome UpCloud host. Sign up using this link to get $25 free credit.

I recently compared Digital Ocean, Vultr and UpCloud Disk IO here and UpCloud came out on top by a long way (read the blog post here).

Here is my blog post on moving from Vultr to UpCloud.

Spoiler: UpCloud performance is great.

Upcloud Site Speed in GTMetrix

I have never had an UpCloud page load take longer than 2 seconds since moving.

UpCloud API

UpCloud has an API that we can opt in to and use to manage our servers. Read the official UpCloud API documentation here.

The API allows you to control:

  • Accounts
  • Pricing
  • Zones
  • Timezones
  • Plans
  • Servers
  • Storages
  • IP-Addresses
  • Firewall
  • Tags
  • etc

Create a sub-account to query the API

You should create a new user account (in the UpCloud dashboard) just for API access. I created two accounts, one for use on my server and one for my home laptop (and set the limiting IP(s) that can access them).

Create a Sub Account for API Access

Login to your UpCloud account (create an account here and get $25 free credit),

  1. Click My Accounts,
  2. Click User Accounts,
  3. Click Change on your user and enable API connections.
  4. TIP: Set up an IP rule to limit access to your API for security (I set up a VPN to get a static IP on my dynamic-IP Internet connection at home).
  5. Save the changes

Enable API Connections

TIP: Lockdown the account to have the minimum permissions required.

e.g

  • Disable access to the control panel (Untick).
  • Allow API Connections (Tick) and specify an IP
  • Disable access to billing contact (Untick).
  • Disable access to billing section in the control panel (Untick).
  • Disable allowing of emails to billing contact (Untick).
  • Allow or remove access to all servers (or manually add access to desired servers)
  • Allow or Remove access to modify storage (or manually allow or remove access to desired storage)
  • etc

Lock down the account to the minimum needed

Save the account.

Now let’s make our first API call

I use OSX and I use the awesome Paw API testing tool from https://paw.cloud (this is not a plug, they are awesome). Postman is a popular API testing tool too. Any good programming language or CLI will allow you to send API requests.

First, let's prepare the authorization string (this is a Base64-encoded combination of your username and password); read more here.

  1. Head over to https://www.base64encode.org/
  2. Click the Encode tab
  3. Add your “username:password” (without the quotes).
  4. Click Encode

A Base64 string will be outputted 🙂

e.g > eW91cmFwaXVzZXJuYW1lOnlvdXJzdXBlcnNlY3VyZXBhc3N3b3Jk

FYI: you can also encode and decode Base64 from the Ubuntu command line.

Encode Base64 from the CLI Sample

echo -n 'yourapiusername:yoursupersecurepassword' | base64
eW91cmFwaXVzZXJuYW1lOnlvdXJzdXBlcnNlY3VyZXBhc3N3b3Jk

Decode Base64 from the CLI Sample

echo `echo eW91cmFwaXVzZXJuYW1lOnlvdXJzdXBlcnNlY3VyZXBhc3N3b3Jk | base64 --decode`
yourapiusername:yoursupersecurepassword

Now we can add an “Authorization Basic” token to the API request in Paw.

Authorization Header added with my base64 token.

A quick test of the UpCloud Prices API endpoint https://api.upcloud.com/1.2/price reveals the API is working.

Add Authorization Token

I can now see a full breakdown of my service prices in JSON 🙂
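
If you don't use Paw or Postman, the same request can be made with curl, which takes care of the Base64 step for you via -u (the credentials below are the placeholder ones from the encoding example):

curl -s -u 'yourapiusername:yoursupersecurepassword' https://api.upcloud.com/1.2/price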

Query My Account

OK, let's see how much credit I have left by querying https://api.upcloud.com/1.2/account. I duplicated the request in Paw and changed the URL, but no data returned?

I had to enable "Access to Billing section in Control Panel" for the user before this data was returned from the API (makes sense).

> HTTP/1.1 200 OK

Query (GET)

GET /1.2/account HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic *******************************************

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:23:32 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 91
Server: Apache

{
   "account" : {
      "credits" : 2500.00,
      "username" : "yourapiusername"
   }
}

“2500.00” = cents ($25)

Query All of Your Servers

Ok, Let’s get server information by querying https://api.upcloud.com/1.2/server

Query (GET)

GET /1.2/server HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:32:22 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 1154
Server: Apache

{
   "servers" : {
      "server" : [
         {
            "core_number" : "1",
            "hostname" : "server1nameredacted.com",
            "license" : 0,
            "memory_amount" : "2048",
            "plan" : "1xCPU-2GB",
            "plan_ipv4_bytes" : "3472464313",
            "plan_ipv6_bytes" : "166293599",
            "state" : "started",
            "tags" : {
               "tag" : [
                  "tag1"
               ]
            },
            "title" : "server1nameredacted.com",
            "uuid" : "########-####-####-####-############",
            "zone" : "us-chi1"
         },
         {
            "core_number" : "1",
            "hostname" : "server2nameredacted.com",
            "license" : 0,
            "memory_amount" : "1024",
            "plan" : "1xCPU-1GB",
            "plan_ipv4_bytes" : "198911",
            "plan_ipv6_bytes" : "19742",
            "state" : "started",
            "tags" : {
               "tag" : [
                  "tag2"
               ]
            },
            "title" : "server1nameredacted.com",
            "uuid" : "########-####-####-####-############",
            "zone" : "us-chi1"
         }
      ]
   }
}

Query Server Information

I have redacted the UUIDs for my servers, but once you know them you can query them by hitting https://api.upcloud.com/1.2/server/########-####-####-####-############

Query (GET)

GET /1.2/server/########-####-####-####-############ HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:45:14 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 1656
Server: Apache

{
   "server" : {
      "boot_order" : "cdrom,disk",
      "core_number" : "1",
      "firewall" : "on",
      "host" : redacted,
      "hostname" : "server1nameredacted.com",
      "ip_addresses" : {
         "ip_address" : [
            {
               "access" : "private",
               "address" : "##.#.#.###",
               "family" : "IPv4"
            },
            {
               "access" : "public",
               "address" : "###.###.###.###",
               "family" : "IPv4",
               "part_of_plan" : "yes"
            },
            {
               "access" : "public",
               "address" : "####:####:####:####:####:####:########",
               "family" : "IPv6"
            }
         ]
      },
      "license" : 0,
      "memory_amount" : "2048",
      "nic_model" : "virtio",
      "plan" : "1xCPU-2GB",
      "plan_ipv4_bytes" : "3519033266",
      "plan_ipv6_bytes" : "168200052",
      "state" : "started",
      "storage_devices" : {
         "storage_device" : [
            {
               "address" : "virtio:0",
               "boot_disk" : "0",
               "part_of_plan" : "yes",
               "storage" : "########-####-####-####-############",
               "storage_size" : 50,
               "storage_title" : "system",
               "type" : "disk"
            }
         ]
      },
      "tags" : {
         "tag" : [
            "fearby"
         ]
      },
      "timezone" : "Australia/Sydney",
      "title" : "server1nameredacted.com",
      "uuid" : "########-####-####-####-############",
      "video_model" : "cirrus",
      "vnc" : "off",
      "vnc_password" : "#########################",
      "zone" : "us-chi1"
   }
}

The server's name, IPv4 and IPv6 network adapters, CPU(s), memory, disk size and UUIDs are all visible 🙂

Surprisingly the VNC password is visible (enabling access to the root console).

TIP: Ensure your API account is safe and secure.

Query Storage Information

Now, Let’s query the storage with the GUID from above by querying https://api.upcloud.com/1.2/storage/########-####-####-####-############

Query (GET)

GET /1.2/storage/########-####-####-####-############ HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic  ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 04:53:36 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 559
Server: Apache

{
   "storage" : {
      "access" : "private",
      "backup_rule" : {},
      "backups" : {
         "backup" : [
            "########-####-####-####-############"
         ]
      },
      "license" : 0,
      "part_of_plan" : "yes",
      "servers" : {
         "server" : [
            "########-####-####-####-############"
         ]
      },
      "size" : 50,
      "state" : "online",
      "tier" : "maxiops",
      "title" : "system",
      "type" : "normal",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

I can see information about the storage’s assigned server and backups 🙂

Query Backup Information

Backup storage can be queried with the same storage API endpoint https://api.upcloud.com/1.2/storage/########-####-####-####-############

Query (GET)

GET /1.2/storage/014fd483-ea90-4055-b445-bf2011951999 HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Authorization: Basic ##############base64hash##############

Output

HTTP/1.1 200 OK
Date: Sun, 17 Jun 2018 05:01:11 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 412
Server: Apache

{
   "storage" : {
      "access" : "private",
      "created" : "2018-06-16T04:47:56Z",
      "license" : 0,
      "origin" : "########-####-####-####-############",
      "servers" : {
         "server" : []
      },
      "size" : 50,
      "state" : "online",
      "title" : "On-Demand Backup",
      "type" : "backup",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

Rename Backup

One thing that I would like to be able to do is rename on-demand backups in the UpCloud dashboard (this is not a feature yet), but I can rename manual backups via the API 🙂

Boring “On-Demand Backup” label.

Rename Backups Not possible in the GUI

I tried sending JSON to https://api.upcloud.com/1.2/storage/########-####-####-####-############ to rename a backup but kept getting an error?

JSON

{
  "storage": {
    "title": "Latest manual backup , Working NGINX, PHP, MySQL w Tweaks",
    "size": "50"
  }
}

Result

"error_code" : "CONTENT_TYPE_INVALID",
"error_message" : "The Content-Type header has an invalid value."

I googled and found an old manual for UpCloud's API (official support here).

I added these missing content-type headers (108 was the length in chars of the payload)

Content-Type: application/json; charset=UTF-8
Content-Length: 108

Still no go?

I thought the Content-Length value was wrong (more here).

I fixed it; it turned out I had a semicolon (and charset parameter) in the Content-Type value. The JSON RFC always assumes that JSON content is UTF-8 encoded (more here).

This Fails

Content-Type: application/json; charset=utf-8

This Works

Content-Type: application/json

Now I can rename my Backup (storage). I manually calculated the length of the JSON payload and added a “Content-Length” header and value.

Query (PUT)

PUT /1.2/storage/########-####-####-####-############ HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Content-Type: application/json
Content-Length: 113
Authorization: Basic ##############base64hash##############

{"storage":{"size":"50","title":"Latest manual backup , Working NGINX, PHP, MySQL w Tweaks"}}

Output

HTTP/1.1 202 ACCEPTED
Date: Sun, 17 Jun 2018 05:47:02 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 453
Server: Apache

{
   "storage" : {
      "access" : "private",
      "created" : "2018-06-16T04:47:56Z",
      "license" : 0,
      "origin" : "########-####-####-####-############",
      "servers" : {
         "server" : []
      },
      "size" : 50,
      "state" : "online",
      "title" : "Latest manual backup , Working NGINX, PHP, MySQL w Tweaks",
      "type" : "backup",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

Success 🙂

Backup Renamed
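
For what it's worth, curl calculates the Content-Length for you, so the same rename (credentials and UUID are placeholders) is a single command:

curl -s -X PUT \
  -u 'yourapiusername:yoursupersecurepassword' \
  -H 'Content-Type: application/json' \
  -d '{"storage":{"size":"50","title":"Latest manual backup , Working NGINX, PHP, MySQL w Tweaks"}}' \
  https://api.upcloud.com/1.2/storage/########-####-####-####-############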

Create a Backup

Backups can be performed by adding "/backup" to the end of the storage URL.

Query (POST)

POST /1.2/storage/########-####-####-####-############/backup HTTP/1.1
Host: api.upcloud.com
User-Agent: Paw/3.1.7 (Macintosh; OS X/10.13.5) NSURLConnection/1452.23
Content-Type: application/json
Content-Length: 100
Authorization: Basic ##############base64hash##############

{
  "storage": {
    "title": "Sunday 17th Latest backup , Working NGINX, PHP, MySQL w Tweaks"
  }
}

Output

HTTP/1.1 201 CREATED
Date: Sun, 17 Jun 2018 06:17:35 GMT
Content-Type: application/json; charset=UTF-8
Connection: close
Content-Length: 487
Server: Apache

{
   "storage" : {
      "access" : "private",
      "created" : "2018-06-17T06:17:35Z",
      "license" : 0,
      "origin" : "########-####-####-####-############",
      "progress" : "0",
      "servers" : {
         "server" : []
      },
      "size" : 50,
      "state" : "maintenance",
      "title" : "Sunday 17th Latest backup , Working NGINX, PHP, MySQL w Tweaks",
      "type" : "backup",
      "uuid" : "########-####-####-####-############",
      "zone" : "us-chi1"
   }
}

Success (UpCloud GUI)

Conclusion

UpCloud does have great API docs.

I can easily integrate this into bash scripts to manage my servers via API and a future Java app for managing servers.

Paw does give CURL output, allowing me to copy working API calls for use in BASH 🙂
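
As a rough illustration of that idea (credentials and UUID below are placeholders), an on-demand backup from a bash script could look something like this:

#!/bin/bash
# Hypothetical sketch: create an on-demand backup of one storage device via the UpCloud API
API_USER='yourapiusername'
API_PASS='yoursupersecurepassword'
STORAGE_UUID='########-####-####-####-############'

curl -s -X POST \
  -u "${API_USER}:${API_PASS}" \
  -H 'Content-Type: application/json' \
  -d "{\"storage\":{\"title\":\"Scripted backup $(date +%F)\"}}" \
  "https://api.upcloud.com/1.2/storage/${STORAGE_UUID}/backup"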

More to come

  1. BASH Script to Deploy and configure a server on UpCloud via Initialization scripts (or manual) (1 week)
  2. JAVA App to manage your server (3 months)

If you are signing up for UpCloud please consider using my referral code and get $25 credit for free.

Read my setup guide here.

https://www.upcloud.com/register/?promo=D84793

I hope this guide helps someone.

Ask a question or recommend an article

[contact-form-7 id=”30″ title=”Ask a Question”]

Revision History

v1.1 Fixed typo

v1.0 Initial Post.

Filed Under: API, Backup, Cloud, Linux, Networking, Restore, UpCloud, VM Tagged With: api, How, Manage, servers, the, to, UpCloud, use, your

Measuring VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and comparing Vultr, Digital Ocean and UpCloud – Part 3 of 4

June 5, 2018 by Simon

How can you measure VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and compare Vultr, Digital Ocean and UpCloud – Part 3 of 4

Read Part 1, Part 2, Part 3 or Part 4

I used these commands to generate bonnie++ reports from the data in part 2

echo "<h1>Bonnie Results</h1>" > /www-data/bonnie.html
echo "<h2>Vultr (Sydney)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528177870,4G,,656,99,308954,68,113706,33,1200,92,188671,30,10237,251,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,26067us,119ms,179ms,29139us,26069us,16118us,1463us,703us,880us,263us,119us,593us" | bon_csv2html >> /www-data/bonnie.html
echo "<h2>Digital Ocean (London)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528186398,4G,,699,99,778636,74,610414,60,1556,99,1405337,59,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,17678us,10099us,17014us,7027us,3067us,2366us,1243us,376us,611us,108us,59us,181us" | bon_csv2html >> /www-data/bonnie.html
echo "<h2>UpCloud (Singapore)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528226703,4G,,1014,99,407179,24,366622,32,2137,99,451886,17,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,11297us,54232us,16443us,4949us,44883us,1595us,264us,340us,561us,138us,66us,327us" | bon_csv2html >> /www-data/bonnie.html

Image of results here

Bonnie Results

Network Performance

IMHO network latency has the biggest impact on server performance; read my old post on scalability on a budget here. I am in Australia, and having a server in Singapore was too far away; the latency was terrible.

Here is a non-scientific example of pinging a Vultr, Digital Ocean and UpCloud server in three different locations (and Google).

Ping Test

Test Ping Results

  • Vultr 132ms Ping Average (Sydney)
  • Digital Ocean 322ms Ping Average (London)
  • UpCloud 180ms Ping Average (Singapore)

Latency matters, run a https://www.webpagetest.org/ scan over your site to see why.

Adding HTTPS added almost 0.7 seconds to HTTPS communications in the past on Digital Ocean (a few thousand kilometres away). The longer the latency, the longer HTTPS handshakes take.

SSL
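
You can measure the handshake cost yourself with curl's timing variables (time_appconnect includes the TLS handshake):

curl -o /dev/null -s -w 'DNS: %{time_namelookup}s  TLS done: %{time_appconnect}s  Total: %{time_total}s\n' https://fearby.com/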

Deploying a server to Singapore (in my experience) is bad if your visitors are in Australia, but deploying to other regions may be lower in cost. It's a trade-off.

Server Location

Deploying servers as close as you can to your customers is the best tip for performance.

Deploy servers close to your customers

Also, consider setting up Image Optimization and Image CDN plugins (guide here) in WordPress and using Cloudflare (guide here)

Benchmarking with SysBench

Install CPU Benchmark

sudo apt-get install sysbench

CPU Benchmark (Vultr/Sydney)

sysbench --test=cpu --cpu-max-prime=20000 run

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          39.1700s
    total number of events:              10000
    total time taken by event execution: 39.1586
    per-request statistics:
         min:                                  2.90ms
         avg:                                  3.92ms
         max:                                 20.44ms
         approx.  95 percentile:               7.43ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   39.1586/0.00

39.15 seconds

CPU Benchmark (Digital Ocean/London)

sysbench --test=cpu --cpu-max-prime=20000 run

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          33.4382s
    total number of events:              10000
    total time taken by event execution: 33.4352
    per-request statistics:
         min:                                  3.24ms
         avg:                                  3.34ms
         max:                                  6.45ms
         approx.  95 percentile:               3.45ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   33.4352/0.00

33.43 sec

CPU Benchmark (UpCloud/Singapore)

sysbench --test=cpu --cpu-max-prime=20000 run

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1



Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          23.7809s
    total number of events:              10000
    total time taken by event execution: 23.7780
    per-request statistics:
         min:                                  2.35ms
         avg:                                  2.38ms
         max:                                  6.92ms
         approx.  95 percentile:               2.46ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   23.7780/0.00

23.77 sec

Surprisingly, 1st place in prime generation goes to UpCloud, then Digital Ocean then Vultr.  UpCloud has some good processors.

Processors:

  • UpCloud (Singapore): Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz
  • Digital Ocean (London): Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz
  • Vultr (Sydney): Virtual CPU a7769a6388d5 (Masked/Hidden CPU @ 2.40GHz)

(Lower is better)

prime benchmark results

(oops, typo in the chart should say Vultr)

Benchmark the file IO

Confirm free space

df -h /

Install Sysbench

sudo apt-get install sysbench

I had 10GB free on all servers (Vultr, Digital Ocean and UpCloud) so I created a 10GB test file.

sysbench --test=fileio --file-total-size=10G prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

128 files, 81920Kb each, 10240Mb total
Creating files for the test...

Now I can run the benchmark using the pre-created test files.

sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run

SysBench description from the Ubuntu manpage.

“SysBench is a modular, cross-platform and multi-threaded benchmark tool for evaluating OS parameters that are important for a system running a database under intensive load. The idea of this benchmark suite is to quickly get an impression about system performance without setting up complex database benchmarks or even without installing a database at all.”

SysBench Results (Vultr/Sydney)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  385920 Read, 257280 Write, 823266 Other = 1466466 Total
Read 5.8887Gb  Written 3.9258Gb  Total transferred 9.8145Gb  (33.5Mb/sec)
 2143.98 Requests/sec executed

Test execution summary:
    total time:                          300.0026s
    total number of events:              643200
    total time taken by event execution: 182.4249
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.28ms
         max:                                 18.12ms
         approx.  95 percentile:               0.55ms

Threads fairness:
    events (avg/stddev):           643200.0000/0.00
    execution time (avg/stddev):   182.4249/0.00

SysBench Results (Digital Ocean/London)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  944280 Read, 629520 Write, 2014432 Other = 3588232 Total
Read 14.409Gb  Written 9.6057Gb  Total transferred 24.014Gb  (81.968Mb/sec)
 5245.96 Requests/sec executed

Test execution summary:
    total time:                          300.0024s
    total number of events:              1573800
    total time taken by event execution: 160.5558
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.10ms
         max:                                 18.62ms
         approx.  95 percentile:               0.34ms

Threads fairness:
    events (avg/stddev):           1573800.0000/0.00
    execution time (avg/stddev):   160.5558/0.00

SysBench Results (UpCloud/Singapore)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  994320 Read, 662880 Write, 2121090 Other = 3778290 Total
Read 15.172Gb  Written 10.115Gb  Total transferred 25.287Gb  (86.312Mb/sec)
 5523.97 Requests/sec executed

Test execution summary:
    total time:                          300.0016s
    total number of events:              1657200
    total time taken by event execution: 107.4434
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.06ms
         max:                                 15.43ms
         approx.  95 percentile:               0.13ms

Threads fairness:
    events (avg/stddev):           1657200.0000/0.00
    execution time (avg/stddev):   107.4434/0.00

Comparison

Sysbench Results table

sysbench fileio results (text)

Read

  • Vultr (Sydney): 385,920
  • Digital Ocean (London): 944,280
  • UpCloud (Singapore): 994,320

Write

  • Vultr (Sydney): 257,280
  • Digital Ocean (London): 629,520
  • UpCloud (Singapore): 662,880

Other

  • Vultr (Sydney): 823,266
  • Digital Ocean (London): 2,014,432
  • UpCloud (Singapore): 2,121,090

Total Read Gb

  • Vultr (Sydney): 5.8887 Gb
  • Digital Ocean (London): 14.409 Gb
  • UpCloud (Singapore): 15.172 Gb

Total Written Gb

  • Vultr (Sydney): 3.9258 Gb
  • Digital Ocean (London): 9.6057 Gb
  • UpCloud (Singapore): 10.115 Gb

Total Transferred Gb

  • Vultr (Sydney): 9.8145 Gb
  • Digital Ocean (London): 24.014 Gb
  • UpCloud (Singapore): 25.287 Gb

Now I can remove the fileio benchmark test files

sysbench --test=fileio --file-total-size=10G cleanup
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Removing test files...

Confirm the test file has been deleted

df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G   16G   23G  41% /

Bonus: Benchmark MySQL (on my main server (Vultr), not on Digital Ocean and UpCloud)

I tried to run a command

sysbench --test=oltp --oltp-table-size=1000000 --db-driver=mysql --mysql-db=test --mysql-user=root --mysql-password=#################################### prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1049: Unknown database 'test'
FATAL: failed to connect to database server!

To fix the error I created a test database with Adminer (guide here).

Create Test Table
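
If you'd rather skip Adminer, the same fix from the command line is simply (assuming root credentials):

mysql -u root -p -e "CREATE DATABASE test;"

Then re-run the sysbench prepare command above.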

< Previous – Next >

Read Part 1, Part 2, Part 3 or Part 4

Filed Under: CDN, Cloud, Cloudflare, Digital Ocean, disk, ExactDN, Hosting, Performance, PHP, php72, Scalability, Scalable, Server, Speed, Storage, Ubuntu, UI, UpCloud, VM, Vultr Tagged With: and, can, comparing, Concurrent, cpu, digital ocean, Disk, etc, How, Latency, measure, on, Performance, ubuntu, UpCloud - Part 3 of 4, Users, vm, vultr, you

Measuring VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and comparing Vultr, Digital Ocean and UpCloud – Part 2 of 4

June 5, 2018 by Simon

How can you measure VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and compare Vultr, Digital Ocean and UpCloud – Part 2 of 4

Read Part 1, Part 2, Part 3 or Part 4

Measure Disk Performance with Bonnie++

Installing Bonnie++ on Ubuntu

apt-get install bonnie++

Read this post on using Bonnie++

Benchmark disk IO with DD and Bonnie++

Starting Bonnie++

bonnie++ -d /tmp -r 2048 -u username

Bonnie++ Readme.

Disk io with bonnie++ on Vultr/Sydney

Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
servername 4G 656 99 308954 68 113706 33 1200 92 188671 30 10237 251
Latency 26067us 119ms 179ms 29139us 26069us 16118us
Version 1.97 ------Sequential Create------ --------Random Create--------
servername -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency 1463us 703us 880us 263us 119us 593us
1.97,1.97,servername,1,1528177870,4G,,656,99,308954,68,113706,33,1200,92,188671,30,10237,251,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,26067us,119ms,179ms,29139us,26069us,16118us,1463us,703us,880us,263us,119us,593us

Disk io with bonnie++ on Digital Ocean/London

Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
servername 4G 699 99 778636 74 610414 60 1556 99 1405337 59 +++++ +++
Latency 17678us 10099us 17014us 7027us 3067us 2366us
Version 1.97 ------Sequential Create------ --------Random Create--------
servername -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency 1243us 376us 611us 108us 59us 181us
1.97,1.97,servername,1,1528186398,4G,,699,99,778636,74,610414,60,1556,99,1405337,59,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,17678us,10099us,17014us,7027us,3067us,2366us,1243us,376us,611us,108us,59us,181us

Disk io with bonnie++ on UpCloud/Singapore

Writing a byte at a time...done
Writing intelligently...done
Rewriting...done
Reading a byte at a time...done
Reading intelligently...done
start 'em...done...done...done...done...done...
Create files in sequential order...done.
Stat files in sequential order...done.
Delete files in sequential order...done.
Create files in random order...done.
Stat files in random order...done.
Delete files in random order...done.
Version 1.97 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
servername 4G 1014 99 407179 24 366622 32 2137 99 451886 17 +++++ +++
Latency 11297us 54232us 16443us 4949us 44883us 1595us
Version 1.97 ------Sequential Create------ --------Random Create--------
servername -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency 264us 340us 561us 138us 66us 327us
1.97,1.97,servername,1,1528226703,4G,,1014,99,407179,24,366622,32,2137,99,451886,17,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,11297us,54232us,16443us,4949us,44883us,1595us,264us,340us,561us,138us,66us,327us

Now read this guide on how to make sense of the Bonnie++ data above
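
Alternatively, the bonnie++ package ships with bon_csv2html and bon_csv2txt, which turn the CSV record on the last line of each run above into a readable report. A minimal sketch, assuming you saved a run to bonnie-results.txt as in the earlier example:

# The CSV record is the final line of Bonnie++ output; convert it to an HTML table
tail -n 1 bonnie-results.txt | bon_csv2html > bonnie-report.html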


Read Part 1, Part 2, Part 3 or Part 4
