


Securing Ubuntu in the cloud

August 9, 2017 by Simon

It is easy to deploy servers to the cloud; within a few minutes you can have a cloud-based server that you (or others) can use. Ubuntu has a great guide on basic security setup, but what do you actually need to do?

If you do not secure your server, expect it to be hacked into. Below are tips on securing your cloud server.

First, read more on scanning your server with the Lynis security scanner.
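
If you want to try Lynis straight away, a minimal sketch (the Ubuntu-packaged version may be older than the upstream release):

# Install Lynis and run a full system audit
sudo apt-get install lynis
sudo lynis audit system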

Always use up-to-date software

Always use up-to-date software. Malicious users can detect what software you use with sites like shodan.io (or port-scanning tools) and then look for weaknesses from well-published lists (e.g. WordPress, Windows, MySQL, Node, Liferay, Oracle etc). People can even use Google to search for login pages or sites with passwords in HTML (yes, it is that simple). Once a system is identified by a malicious user, they can send automated bots to break into your site (trying millions of passwords a day) or use tools to bypass existing defences (security researcher Troy Hunt found out it's child's play).

Port-scan sites like https://mxtoolbox.com/SuperTool.aspx?action=scan are good for finding out what you have exposed.

You can also use local programs like nmap to view open ports.

Install nmap

sudo apt-get install nmap

Find open ports

nmap -v -sT localhost

Starting Nmap 7.01 ( https://nmap.org ) at 2017-08-08 23:57 AEST
Initiating Connect Scan at 23:57
Scanning localhost (127.0.0.1) [1000 ports]
Discovered open port 80/tcp on 127.0.0.1
Discovered open port 3306/tcp on 127.0.0.1
Discovered open port 22/tcp on 127.0.0.1
Discovered open port 9101/tcp on 127.0.0.1
Discovered open port 9102/tcp on 127.0.0.1
Discovered open port 9103/tcp on 127.0.0.1
Completed Connect Scan at 23:57, 0.05s elapsed (1000 total ports)
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00020s latency).
Not shown: 994 closed ports
PORT     STATE SERVICE
22/tcp   open  ssh
80/tcp   open  http
3306/tcp open  mysql
9101/tcp open  jetdirect
9102/tcp open  jetdirect
9103/tcp open  jetdirect

Read data files from: /usr/bin/../share/nmap
Nmap done: 1 IP address (1 host up) scanned in 0.17 seconds
           Raw packets sent: 0 (0B) | Rcvd: 0 (0B)

Limit ssh connections

Read more here.

Use ufw to set limits on login attempts

sudo ufw limit ssh comment 'Rate limit hit for openssh server'

Only allow known IPs access to your valuable ports

sudo ufw allow from 123.123.123.123/32 to any port 22

Delete unwanted firewall rules

sudo ufw status numbered
sudo ufw delete 8

Only allow known IPs to certain ports

sudo ufw allow from 123.123.123.123 to any port 80/tcp

Also, set outgoing traffic to known active servers and ports

sudo ufw allow out from 123.123.123.123 to any port 22

Don’t use a weak or common Diffie-Hellman key for SSL certificates; more information here.

openssl req -new -newkey rsa:4096 -nodes -keyout server.key -out server.csr
 
Generating a 4096 bit RSA private key
...

More info on generating SSL certs here and setting up Public Key Pinning here.
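
As a rough sketch (the 4096-bit size and the /etc/nginx/ssl/ path match what is used later in this guide), you can generate your own Diffie-Hellman parameters and point NGINX at them:

# Generate a 4096 bit Diffie-Hellman group (this can take several minutes)
sudo openssl dhparam -out /etc/nginx/ssl/dhparams4096.pem 4096
# Then reference it in your NGINX server block:
# ssl_dhparam /etc/nginx/ssl/dhparams4096.pem;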

Intrusion Prevention Software

Do run fail2ban: Guide here https://www.linode.com/docs/security/using-fail2ban-for-security

I use iThemes Security to secure my WordPress and block repeat failed logins from certain IP addresses.

iThemes Security can even lock down your WordPress.

You can set iThemes to automatically lock out users after x failed logins.

Remember to whitelist your own allowed IPs though (it is so easy to lock yourself out of your server).

Passwords

Do have strong passwords and change the root password provided by the host. https://howsecureismypassword.net/ is a good site to see how strong your password is against brute-force attempts. https://www.grc.com/passwords.htm is a good site to obtain a strong password. Do follow Troy Hunt’s blog and Twitter account to keep up to date with security issues.
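
If you prefer to generate a strong password locally instead of via a website, a quick sketch using OpenSSL:

# Generate a random 32 character base64 password on the server
openssl rand -base64 32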

Configure a Firewall Basics

You should install and configure a firewall on Ubuntu, and also configure the firewall provided by your host (e.g. AWS, Vultr, Digital Ocean).

Configure a Firewall on AWS

My AWS server setup guide is here. AWS allows you to configure the firewall in the Amazon Console.

Type         Protocol  Port Range  Source     Comment
HTTP         TCP       80          0.0.0.0/0  Opens a web server port for later
All ICMP     ALL       N/A         0.0.0.0/0  Allows you to ping
All traffic  ALL       All         0.0.0.0/0  Not advisable long term but OK for testing today
SSH          TCP       22          0.0.0.0/0  Not advisable; try to limit this to known IPs only
HTTPS        TCP       443         0.0.0.0/0  Opens a secure web server port for later

Configure a Firewall on Digital Ocean

Configuring a firewall on Digital Ocean (create a $5/m server here).  You can configure your Digital Ocean droplet firewall by clicking Droplet, Networking then Manage Firewall after logging into Digital Ocean.

Configure a Firewall on Vultr

Configuring a firewall on Vultr (create a $2.5/m server here).

Don’t forget to set IP rules for both IPv4 and IPv6. Only open the ports you need and ensure applications have strong passwords.

Ubuntu has a firewall built in (documentation).

sudo ufw status

Enable the firewall

sudo ufw enable

Adding common ports

sudo ufw allow ssh/tcp
sudo ufw logging on
sudo ufw allow 22
sudo ufw allow 80
sudo ufw allow 53
sudo ufw allow 443
sudo ufw allow 873
sudo ufw enable
sudo ufw status
sudo ufw allow http
sudo ufw allow https

Add a whitelist for your IP (use http://icanhazip.com/ to get your IP) to ensure you won’t get kicked out of your server.

sudo ufw allow from 123.123.123.123/24 to any port 22
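
A quick way to grab your current public IP from the terminal before adding the rule (assuming curl is installed):

# Show your public IPv4 address
curl -4 http://icanhazip.com/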

More help here. Here is a good guide on ufw commands. Info on port numbers here.

https://en.wikipedia.org/wiki/List_of_TCP_and_UDP_port_numbers

If you don’t have a server yet, you can create a $5 a month Digital Ocean server here or a $2.5 a month Vultr server here.

Backups

rsync is a good way to copy files to another server, or you can use Bacula.

sudo apt install bacula
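
A minimal rsync sketch (the paths, user and host below are placeholders; adjust them to your own backup server):

# Copy the local /www folder to a remote backup server over SSH
rsync -avz -e ssh /www/ backupuser@backup.example.com:/backups/www/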

Basics

Initial server setup guide (Digital Ocean).

Sudo (admin user)

Read this guide on the Linux sudo command (the equivalent of Run as Administrator on Windows).

Users

List users on an Ubuntu OS (or compgen -u)

cut -d: -f1 /etc/passwd

Common output

cut -d: -f1 /etc/passwd
root
daemon
bin
sys
sync
games
man
lp
mail
news
uucp
proxy
www-data
backup
list
irc
gnats
nobody
systemd-timesync
systemd-network
systemd-resolve
systemd-bus-proxy
syslog
_apt
lxd
messagebus
uuidd
dnsmasq
sshd
pollinate
ntp
mysql
clamav

Add User

sudo adduser new_username

e.g

sudo adduser bob
Adding user `bob' ...
Adding new group `bob' (1000) ...
Adding new user `bob' (1000) with group `bob' ...
Creating home directory `/home/bob' ...
etc..

Add user to a group

sudo usermod -a -G MyGroup bob

Show users in a group

getent group MyGroup | awk -F: '{print $4}'

This will show users in a group

Remove a user

sudo userdel username
sudo rm -r /home/username

Rename user

usermod -l new_username old_username

Change user password

sudo passwd username

Groups

Show all groups

compgen -g

Common output

compgen -g
root
daemon
bin
sys
adm
tty
disk
lp
mail
proxy
sudo
www-data
backup
irc
etc

You can create your own groups, but first you must be aware of the existing group IDs.

cat /etc/group

Then you can see your system’s groups and IDs.

Create a group

groupadd -g 999 MyGroup

Permissions

Read this https://help.ubuntu.com/community/FilePermissions

How to list users on Ubuntu.

Read more on setting permissions here.

Chmod help can be found here.

Install Fail2Ban

I used this guide on installing Fail2Ban.

apt-get install fail2ban

Check Fail2Ban often and add blocks to the firewall of known bad IPs

fail2ban-client status
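
For example (the jail name assumes the sshd jail configured later in this post, and the IP address is a placeholder):

# Show details (including currently banned IPs) for the SSH jail
sudo fail2ban-client status sshd
# Permanently block a known bad IP at the firewall
sudo ufw deny from 203.0.113.50 to any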

Best practices

Ubuntu has a guide on basic security setup here.

Startup Processes

It is a good idea to review startup processes from time to time.

sudo apt-get install rcconf
sudo rcconf

Accounts

  • Read up on the concept of least privilege access for apps and services here.
  • Read up on chmod permissions.

Updates

Do update your operating system often.

sudo apt-get update
sudo apt-get upgrade

Minimal software

Only install the software you need.
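
To audit what is already installed and remove anything you do not need (the package name below is just an example):

# List installed packages
apt list --installed | less
# Remove an unneeded package and its unused dependencies
sudo apt-get purge --auto-remove apache2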

Exploits and Keeping up to date

Do keep up to date with exploits and vulnerabilities

  • Follow 0xDUDE on twitter.
  • Read the GDI.Foundation page.
  • Visit the Exploit Database
  • Vulnerability & Exploit Database
  • Subscribe to the Security Now podcast.

Secure your applications

  • NodeJS: Enable logging in applications you install or develop.

Ban repeat login attempts with Fail2Ban

Fail2Ban config

sudo nano /etc/fail2ban/jail.conf

[sshd]

enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 3

Hosts File Hardening

sudo nano /etc/host.conf

Add

order bind,hosts
nospoof on

Add a whitelist with your IP in /etc/fail2ban/jail.conf (see this)

[DEFAULT]
# "ignoreip" can be an IP address, a CIDR mask or a DNS host. Fail2ban will not                          
# ban a host which matches an address in this list. Several addresses can be                             
# defined using space separator.
                                                                         
ignoreip = 127.0.0.1 192.168.1.0/24 8.8.8.8

Restart the service

sudo service fail2ban restart
sudo service fail2ban status

Intrusion detection (logging) systems

Tiger and Tripwire will not block or prevent intrusions, but they will log and give you a heads-up on risks and things of concern.

Install Tiger and Tripwire.

sudo apt-get install tiger tripwire

Running Tiger

sudo tiger

This will scan your system for issues of note

sudo tiger
Tiger UN*X security checking system
   Developed by Texas A&M University, 1994
   Updated by the Advanced Research Corporation, 1999-2002
   Further updated by Javier Fernandez-Sanguino, 2001-2015
   Contributions by Francisco Manuel Garcia Claramonte, 2009-2010
   Covered by the GNU General Public License (GPL)

Configuring...

Will try to check using config for 'x86_64' running Linux 4.4.0-89-generic...
--CONFIG-- [con005c] Using configuration files for Linux 4.4.0-89-generic. Using
           configuration files for generic Linux 4.
Tiger security scripts *** 3.2.3, 2008.09.10.09.30 ***
20:42> Beginning security report for simon.
20:42> Starting file systems scans in background...
20:42> Checking password files...
20:42> Checking group files...
20:42> Checking user accounts...
20:42> Checking .rhosts files...
20:42> Checking .netrc files...
20:42> Checking ttytab, securetty, and login configuration files...
20:42> Checking PATH settings...
20:42> Checking anonymous ftp setup...
20:42> Checking mail aliases...
20:42> Checking cron entries...
20:42> Checking 'services' configuration...
20:42> Checking NFS export entries...
20:42> Checking permissions and ownership of system files...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Checking for indications of break-in...
--CONFIG-- [con010c] Filesystem 'fuse.lxcfs' used by 'lxcfs' is not recognised as a valid filesystem
20:42> Performing rootkit checks...
20:42> Performing system specific checks...
20:46> Performing root directory checks...
20:46> Checking for secure backup devices...
20:46> Checking for the presence of log files...
20:46> Checking for the setting of user's umask...
20:46> Checking for listening processes...
20:46> Checking SSHD's configuration...
20:46> Checking the printers control file...
20:46> Checking ftpusers configuration...
20:46> Checking NTP configuration...
20:46> Waiting for filesystems scans to complete...
20:46> Filesystems scans completed...
20:46> Performing check of embedded pathnames...
20:47> Security report completed for simon.
Security report is in `/var/log/tiger/security.report.simon.170809-20:42'.

My Output.

sudo nano /var/log/tiger/security.report.username.170809-18:42

Security scripts *** 3.2.3, 2008.09.10.09.30 ***
Wed Aug  9 18:42:24 AEST 2017
20:42> Beginning security report for username (x86_64 Linux 4.4.0-89-generic).

# Performing check of passwd files...
# Checking entries from /etc/passwd.
--WARN-- [pass014w] Login (bob) is disabled, but has a valid shell.
--WARN-- [pass014w] Login (root) is disabled, but has a valid shell.
--WARN-- [pass015w] Login ID sync does not have a valid shell (/bin/sync).
--WARN-- [pass012w] Home directory /nonexistent exists multiple times (3) in
         /etc/passwd.
--WARN-- [pass012w] Home directory /run/systemd exists multiple times (2) in
         /etc/passwd.
--WARN-- [pass006w] Integrity of password files questionable (/usr/sbin/pwck
         -r).

# Performing check of group files...

# Performing check of user accounts...
# Checking accounts from /etc/passwd.
--WARN-- [acc021w] Login ID dnsmasq appears to be a dormant account.
--WARN-- [acc022w] Login ID nobody home directory (/nonexistent) is not
         accessible.

# Performing check of /etc/hosts.equiv and .rhosts files...

# Checking accounts from /etc/passwd...

# Performing check of .netrc files...

# Checking accounts from /etc/passwd...

# Performing common access checks for root (in /etc/default/login, /securetty, and /etc/ttytab...
--WARN-- [root001w] Remote root login allowed in /etc/ssh/sshd_config

# Performing check of PATH components...
--WARN-- [path009w] /etc/profile does not export an initial setting for PATH.
# Only checking user 'root'

# Performing check of anonymous FTP...

# Performing checks of mail aliases...
# Checking aliases from /etc/aliases.

# Performing check of `cron' entries...
--WARN-- [cron005w] Use of cron is not restricted

# Performing check of 'services' ...
# Checking services from /etc/services.
--WARN-- [inet003w] The port for service ssmtp is also assigned to service
         urd.
--WARN-- [inet003w] The port for service pipe-server is also assigned to
         service search.

# Performing NFS exports check...

# Performing check of system file permissions...
--ALERT-- [perm023a] /bin/su is setuid to `root'.
--ALERT-- [perm023a] /usr/bin/at is setuid to `daemon'.
--ALERT-- [perm024a] /usr/bin/at is setgid to `daemon'.
--WARN-- [perm001w] The owner of /usr/bin/at should be root (owned by daemon).
--WARN-- [perm002w] The group owner of /usr/bin/at should be root.
--ALERT-- [perm023a] /usr/bin/passwd is setuid to `root'.
--ALERT-- [perm024a] /usr/bin/wall is setgid to `tty'.

# Checking for known intrusion signs...
# Testing for promiscuous interfaces with /bin/ip
# Testing for backdoors in inetd.conf

# Performing check of files in system mail spool...

# Performing check for rookits...
# Running chkrootkit (/usr/sbin/chkrootkit) to perform further checks...
--WARN-- [rootkit004w] Chkrootkit has detected a possible rootkit installation
Possible Linux/Ebury - Operation Windigo installetd

# Performing system specific checks...
# Performing checks for Linux/4...

# Checking boot loader file permissions...
--WARN-- [boot02] The configuration file /boot/grub/menu.lst has group
         permissions. Should be 0600
--FAIL-- [boot02] The configuration file /boot/grub/menu.lst has world
         permissions. Should be 0600
--WARN-- [boot06] The Grub bootloader does not have a password configured.

# Checking for vulnerabilities in inittab configuration...

# Checking for correct umask settings for init scripts...
--WARN-- [misc021w] There are no umask entries in /etc/init.d/rcS

# Checking Logins not used on the system ...

# Checking network configuration
--FAIL-- [lin013f] The system is not protected against Syn flooding attacks
--WARN-- [lin017w] The system is not configured to log suspicious (martian)
         packets

# Verifying system specific password checks...

# Checking OS release...
--WARN-- [osv004w] Unreleased Debian GNU/Linux version `stretch/sid'

# Checking installed packages vs Debian Security Advisories...

# Checking md5sums of installed files

# Checking installed files against packages...
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-87-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.devname' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.softdep' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.alias' does not
         belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.builtin.bin'
         does not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.symbols' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/modules/4.4.0-89-generic/modules.dep.bin' does
         not belong to any package.
--WARN-- [lin001w] File `/lib/udev/hwdb.bin' does not belong to any package.

# Performing check of root directory...

# Checking device permissions...
--WARN-- [dev003w] The directory /dev/block resides in a device directory.
--WARN-- [dev003w] The directory /dev/char resides in a device directory.
--WARN-- [dev003w] The directory /dev/cpu resides in a device directory.
--FAIL-- [dev002f] /dev/fuse has world permissions
--WARN-- [dev003w] The directory /dev/hugepages resides in a device directory.
--FAIL-- [dev002f] /dev/kmsg has world permissions
--WARN-- [dev003w] The directory /dev/lightnvm resides in a device directory.
--WARN-- [dev003w] The directory /dev/mqueue resides in a device directory.
--FAIL-- [dev002f] /dev/rfkill has world permissions
--WARN-- [dev003w] The directory /dev/vfio resides in a device directory.

# Checking for existence of log files...
--FAIL-- [logf005f] Log file /var/log/btmp permission should be 660
--FAIL-- [logf007f] Log file /var/log/messages does not exist

# Checking for correct umask settings for user login shells...
--WARN-- [misc021w] There is no umask definition for the dash shell
--WARN-- [misc021w] There is no umask definition for the bash shell

# Checking symbolic links...

# Performing check of embedded pathnames...
20:47> Security report completed for username.

More on Tripwire here.

Hardening PHP

To harden the PHP config (and back it up), first create an info.php file in your website root folder with this content:

<?php
phpinfo()
?>

Now look for which PHP config file is being loaded.

Back up that PHP config file.
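
A quick sketch, assuming PHP 7.0 FPM as used elsewhere in this guide (php --ini confirms which files are loaded):

# Show which php.ini files are loaded
php --ini
# Back up the FPM php.ini before editing it
sudo cp /etc/php/7.0/fpm/php.ini /etc/php/7.0/fpm/php.ini.bak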

TIP: Delete the file with phpinfo() in it as it is a security risk to leave it there.

TIP: Read the OWASP cheat sheet on using PHP securely here and securing php.ini here.

Some common security changes

file_uploads = On
expose_php = Off
error_reporting = E_ALL
display_errors          = Off
display_startup_errors  = Off
log_errors              = On
error_log = /php_errors.log
ignore_repeated_errors  = Off

Don’t forget to review logs, more config changes here.

Antivirus

Yes, it is a good idea to run antivirus on Ubuntu; here is a good list of antivirus software.

I am installing ClamAV as it can be installed on the command line and is open source.

sudo apt-get install clamav

ClamAV help here.

Scan a folder

sudo clamscan --max-filesize=3999M --max-scansize=3999M --exclude-dir=/www/* -i -r /

Setup auto-update antivirus definitions

sudo dpkg-reconfigure clamav-freshclam

I set auto updates 24 times a day (every hour) via daemon updates.

Tip: Download antivirus definition updates manually. If you only have a 512MB server the update may fail, and you may want to stop freshclam/PHP/NGINX and MySQL before you update to ensure the antivirus definitions update. You can move this to a cron job and set it to update at set times instead of relying on the daemon to ensure updates happen.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v
Current working dir is /var/lib/clamav
Max retries == 5
ClamAV update process started at Tue Aug  8 22:22:02 2017
Using IPv6 aware code
Querying current.cvd.clamav.net
TTL: 1152
Software version from DNS: 0.99.2
Retrieving http://db.au.clamav.net/main.cvd
Trying to download http://db.au.clamav.net/main.cvd (IP: 193.1.193.64)
Downloading main.cvd [100%]
Loading signatures from main.cvd
Properly loaded 4566249 signatures from new main.cvd
main.cvd updated (version: 58, sigs: 4566249, f-level: 60, builder: sigmgr)
Querying main.58.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/daily.cvd
Trying to download http://db.au.clamav.net/daily.cvd (IP: 193.1.193.64)
Downloading daily.cvd [100%]
Loading signatures from daily.cvd
Properly loaded 1742284 signatures from new daily.cvd
daily.cvd updated (version: 23644, sigs: 1742284, f-level: 63, builder: neo)
Querying daily.23644.82.1.0.C101C140.ping.clamav.net
Retrieving http://db.au.clamav.net/bytecode.cvd
Trying to download http://db.au.clamav.net/bytecode.cvd (IP: 193.1.193.64)
Downloading bytecode.cvd [100%]
Loading signatures from bytecode.cvd
Properly loaded 66 signatures from new bytecode.cvd
bytecode.cvd updated (version: 308, sigs: 66, f-level: 63, builder: anvilleg)
Querying bytecode.308.82.1.0.C101C140.ping.clamav.net
Database updated (6308599 signatures) from db.au.clamav.net (IP: 193.1.193.64)

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart 

sudo /etc/init.d/clamav-freshclam start

Manual scan with a bash script

Create a bash script

sudo mkdir /scripts
sudo nano /scripts/updateandscanav.sh

# Include contents below.
# Save and quit

sudo chmod +x /scripts/updateandscanav.sh

Bash script contents to update antivirus definitions.

sudo /etc/init.d/clamav-freshclam stop

sudo service php7.0-fpm stop
sudo /etc/init.d/nginx stop
sudo /etc/init.d/mysql stop

sudo freshclam -v

sudo service php7.0-fpm restart
sudo /etc/init.d/nginx restart
sudo /etc/init.d/mysql restart

sudo /etc/init.d/clamav-freshclam start

sudo clamscan --max-filesize=3999M --max-scansize=3999M -v -r /

Edit the crontab to run the script every hour

crontab -e
1 * * * * /bin/bash /scripts/updateandscanav.sh > /dev/null 2>&1

Uninstalling ClamAV

You may need to uninstall ClamAV if you don’t have a lot of memory or find the updates are too big.

sudo apt-get remove --auto-remove clamav
sudo apt-get purge --auto-remove clamav

Setup Unattended Ubuntu Security updates

sudo apt-get install unattended-upgrades
sudo unattended-upgrades -d

At login, you should receive

0 updates are security updates.
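
You can also confirm the periodic upgrade settings that dpkg-reconfigure writes; on a typical Ubuntu 16.04 install the file usually contains something like the two lines shown below:

cat /etc/apt/apt.conf.d/20auto-upgrades
APT::Periodic::Update-Package-Lists "1";
APT::Periodic::Unattended-Upgrade "1";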

Other

  • Read this awesome guide.
  • install Fail2Ban
  • Do check your log files if you suspect suspicious activity.

Check out the extensive Hardening a Linux Server guide at thecloud.org.uk: https://thecloud.org.uk/wiki/index.php?title=Hardening_a_Linux_Server


Moving a CPanel domain with email to a self managed VPS and Gmail

August 3, 2017 by Simon

Below is my guide for moving away from a NetRegistry CPanel domain to a self-managed server and G Suite email.

I have had www.fearby.com since 1999 on three CPanel hosts (superwerbhost in the US, Jumba in Australia, and Uber in Australia; NetRegistry have acquired Uber and performance at the time of writing is terrible). I was never informed by Uber of the sale, but my admin portal was moved from one host to another and each time performance degraded. I tried to speed up WordPress by optimizing images and installing cache plugins, but nothing worked; pages were loading in around 24 seconds on https://www.webpagetest.org.

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

I had issues with a CPanel domain on the hosts (Uber/NetRegistry) as they were migrating domains, and the NetRegistry chat rep said I needed to phone Uber for support. No thanks, I’m going self-managed and saving a dollar.

I decided to take ownership of my slow domain, set up my own VM, direct web traffic to it and redirect email to Gmail (I have done this before). I have set up Digital Ocean VMs (Ubuntu and CentOS), Vultr VMs and AWS VMs.

I have also had enough of Resource Limit Reached messages with CPanel and I can’t wait to…

  • not have a slow WordPress.
  • setup my own server (not a slow hosts server).
  • spend $5 less (we currently pay $25 for a CPanel website with 20GB storage total)
  • get a faster website (sub 24 seconds load time).
  • larger email mailboxes (30GB each).
  • Generate my own “SSL Labs A+ rated” certificate for $10 a year instead of $150 a year for an “SSL Labs C rated” SSL certificate from my existing hosts.

Backup

I have about 10 email accounts on my CPanel domain (using 14GB) and 2x WordPress sites. I want to back up my emails (Outlook Export and a Thunderbird Profile backup) and back up my domain file(s) a few times before I do anything. Once DNS is set in motion, no server waits.

The Plan

Once everything is backed up I intend to set up a $5 a month Vultr VM and redirect all mail to Google G Suite (I have redirected mail before).

I will set up a Vultr web server in Sydney (following my guide here), buy an SSL certificate from Namecheap and move my WordPress sites.

Rough Plan

  • Reduce email accounts from 10x to 3x.
  • Backup emails (twice, with Thunderbird and Outlook).
  • Set up an Ubuntu VM on Vultr.
  • Sign up for a Google G Suite trial.
  • Transfer my domain to Namecheap.
  • Link the domain DNS to Vultr.
  • Link the domain MX records to Google email.
  • Transfer the website.
  • Set up emails on Google.
  • Restore WordPress.
  • Go live.
  • Downgrade to personal G Suite before the trial expires.
  • Close down the old server.

Signing up for Google G Suite

I visited https://gsuite.google.com/ and started creating an account.

Get 20% off your first year by signing up for Google G Suite using this link: https://goo.gl/6vpuMm

Screenshots of Google G Suite setup

I created a link between G Suite and an existing GMail account.

More screenshots of Google G suite setup

Now I can create the admin account.

Picture of G Suite asking how I will log in

Tip: Don’t use any emails that are linked as secondary emails with any Google services (this won’t be allowed). It’s a well-known issue that you cannot add users’ emails that are linked to Google services (even as backup emails for Gmail; detach the email before adding it). Read more here.

Google G suite did not like my email provided

Final setup steps.

Final G suite setup screenshots.

Now I can add email accounts to G Suite.

G Suite said I’m all ready to add users

Adding email users to G Suite.

G Suite adding users

The next thing I had to do was upload a file to my domain to verify I own the domain (DNS verification is also an option).

I must say the setup and verify steps are quite easy to follow on G Suite.

Time to backup our existing CPanel site.

Screenshot of Cpanel users

Backup Step 1 (hopefully I won’t need this)

I decided to grab a complete copy of my CPanel domain with domains, databases and email accounts. This took 24 hours.

CPanel backup screenshot

Backup Step 2 (hopefully I won’t need this)

I downloaded all mail via IMAP in Outlook and Mozilla Thunderbird and exported it (Outlook Export and a Thunderbird Profile backup). Google has IMAP instructions here.

DNS Changes at Namecheap

I obtained my domain EPP code from my CPanel hosts and transferred the domain name to Namecheap.

Namecheap was even nice enough to set my DNS to point to my existing host, so I did not have to rush the move before DNS propagation.

P.S. The Namecheap chat staff and the Namecheap mobile app are awesome.

NameCheap DNS

Having backed up everything, I logged into Namecheap, set my DNS to “NameCheap BasicDNS”, then went to “Advanced DNS” and set the appropriate DNS records for my domain. This assumes you have set up a VM with IPv4 and IPv6 (follow my guide here).

  • A Record @ IPV4_OF_MY_VULTR_SERVER
  • A Record www IPV4_OF_MY_VULTR_SERVER
  • A Record ftp IPV4_OF_MY_VULTR_SERVER
  • AAAA Record @ IPV6_OF_MY_VULTR_SERVER
  • AAAA Record www IPV6_OF_MY_VULTR_SERVER
  • AAAA Record ftp IPV6_OF_MY_VULTR_SERVER
  • C Name www fearby.com

The Google G Suite also asked me to add these following MX records to the DNS records.

  • MX Record @ ASPMX.L.GOOGLE.COM. 1
  • MX Record @ ASPMX1.L.GOOGLE.COM. 5
  • MX Record @ ASPMX2.L.GOOGLE.COM. 5
  • MX Record @ ASPMX3.L.GOOGLE.COM. 10
  • MX Record @ ASPMX4.L.GOOGLE.COM. 10

Then it was a matter of telling Google the DNS changes were made (once DNS had replicated across the US).

My advice is to set DNS changes before bed as it can take 12 hours.

Sites like https://www.whatsmydns.net/ are great for keeping track of DNS replication.
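
You can also check individual resolvers from the terminal with dig; a quick sketch using my domain and Google’s public DNS (8.8.8.8):

# Check the A, AAAA and MX records as seen by Google's resolver
dig +short A fearby.com @8.8.8.8
dig +short AAAA fearby.com @8.8.8.8
dig +short MX fearby.com @8.8.8.8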

Transferring WordPress

I logged into the CPanel and exported my WordPress Database (34MB SQL file).

I had to make the following php.ini changes to allow larger file uploads for the restore with the Adminer utility (the default is 2MB). I could not get the server-side adminer.sls.gz option to restore the database.

post_max_size = 50M
upload_max_filesize = 50M

# do change back to 2MB after you restore the files to prevent DOS attacks.

I had to make the following changes to nginx.conf (to prevent 404 errors on the database upload)

client_max_body_size 50M;
# client_max_body_size 2M; Reset when done

I also had to make these changes to NGINX (sites-available/default) to allow WordPress to work

# Add index.php to the list if you are using PHP
	index index.php index.html index.htm;

location / {
        # try_files $uri $uri/ =404;
        try_files $uri $uri/ /index.php?q=$uri&$args;
        index index.php index.html index.htm;
        proxy_set_header Proxy "";
}

I had a working MySQL (I followed my guide here).

Adminer is the best PHP MySQL management utility (beats PhpMyAdmin hands down).

Restart NGINX and PHP

nginx -t
nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart

I had an error on database import, a non-descript error in script line 1 (error hint here).

A simple search and replace in the SQL fixed it.

Once I had increased the PHP and NGINX upload limits to 50M I was able to upload my database backup with Adminer (just remember to import into the created database that matches wp-config.php). Also, ensure your WordPress content is in place too.

The only other problem I had was WordPress gave an “Error 500”, so I moved a few plugins and all was good.

Importing Old Email

I was able to use the Google G Suite tools to import my old Mail (CPanel IMAP to Google IMAP).

Import IMAP mail to GMail

I love root access on my own server now, goodbye CPanel “Usage Limit Exceeded” errors (I only had light traffic on my site).

My self-hosted WordPress is a lot snappier now, and my server has plenty of space (it only costs $0.007 an hour for 1x CPU, 1GB RAM, 25GB SSD storage and a 1000GB data transfer quota). I use the htop command to view system processor and memory usage.

I can now have more space for content and not be restricted by tight host disk quotas or slow shared servers. I use the pydf command to view disk space.

pydf
Filesystem  Size  Used   Avail  Use%                                                Mounted on
/dev/vda1   25G   3289M  20G    13.1 [######..........................................] /

I use ncdu to view folder usage.

Installing ncdu

sudo apt-get install ncdu
Reading package lists... Done
Building dependency tree
Reading state information... Done
ncdu is already the newest version (1.11-1build1).
0 upgraded, 0 newly installed, 0 to remove and 58 not upgraded.

Type ncdu in the folder you want to browse under.

ncdu

You can arrow up and down folder structures and view folder/file usage.

SSL Certificate

I am setting up a new multi-year SSL cert now; I will update this guide later. I had to read my SSL guide with Digital Ocean here.

I generated a new key and certificate signing request (CSR) on my server.

cd ~/
mkdir sslcsrmaster4096
cd sslcsrmaster4096/
openssl req -new -newkey rsa:4096 -nodes -keyout domain.key -out domain.csr

Sample output for a new certificate

openssl req -new -newkey rsa:4096 -nodes -keyout dummy.key -out dummy.csr
Generating a 4096 bit RSA private key
.................................................................................................++
......++
writing new private key to 'dummy.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]: AU
State or Province Name (full name) [Some-State]: NSW
Locality Name (eg, city) []:Tamworth
Organization Name (eg, company) [Internet Widgits Pty Ltd]: Dummy Org
Organizational Unit Name (eg, section) []: Dummy Org Dept
Common Name (e.g. server FQDN or YOUR name) []: DummyOrg
Email Address []: [email protected]

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []: password
An optional company name []: DummyCO
[email protected]:~/sslcsrmaster4096# cat dummy.csr
-----BEGIN CERTIFICATE REQUEST-----
MIIFAjCCAuoCAQAwgYsxCzAJBgNVBAYTAkFVMQwwCgYDVQQIDANOU1cxETAPBgNV
BAcMCFRhbXdvcnRoMRIwEAYDVQQKDAlEdW1teSBPcmcxFzAVBgNVBAsMDkR1bW15
IE9yZyBEZXB0MREwDwYDVQQDDAhEdW1teU9yZzEbMBkGCSqGSIb3DQEJARYMbWVA
ZHVtbXkub3JnMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA6PUtWkRl
+gL0Hx354YuJ5Sul2Xh+ljILSlFoHAxktKlE+OJDJAtUtVQpo3/F2rGTJWmmtef+
shortenedoutput
swrUzpBv8hjGziPoVdd8qdAA2Gh/Y5LsehQgyXV1zGgjsi2GN4A=
-----END CERTIFICATE REQUEST-----

I then uploaded the certificate to Namecheap for an SSL cert registration.

I selected DNS C Name record as a way to verify I own my domain.

I am now waiting for Namecheap to verify my domain

End of the Google G Suite Business Trial

Before the end of the 14-day trial, you will need to add billing details to keep the email working.

At this stage, you can downgrade from a $10/m business account per user to a $5/m per user account if you wish. The only loss would be storage and google app access.

Get 20% off your first year by signing up for Google G Suite using this link: https://goo.gl/6vpuMm

Before your trial ends, add your payment details and downgrade from $10/user a month business pricing to $5/user a month individual pricing if needed.

G Suite Troubleshooting

I was able to access the new G Suite email account via gmail.com but not via Outlook 2015. I reset the password, followed the Google troubleshooting guide and used the official incoming and outgoing settings, but nothing worked.

troubleshooting 1

Google phone support suggested I enable less secure connection settings as the Google firewall may be blocking Outlook. I know the IMAP RFC is many years old, but I doubt Microsoft is talking to G Suite in a lazy manner.

Now I can view my messages, and I can see one email that said I was blocked by the firewall. Google phone support and FAQs don’t say why Outlook 2015 SSL-based IMAP was blocked.

past email

Conclusion

Thanks to my wife who put up with my continual updates over the entire domain move. Voicing the progress helped me a lot.


Setting up a Vultr VM and configuring it

July 29, 2017 by Simon

Below is my guide on setting up a Vultr VM and configuring it with a static IP, NGINX, MySQL, PHP and an SSL certificate.

I have blogged about setting up CentOS and Ubuntu servers on Digital Ocean before. Digital Ocean does not have data centres in Australia and this kills scalability. AWS is good but 4x the price of Vultr. I have also blogged about setting up an AWS server here. I tried to check out Alibaba Cloud but the verification process was broken, so I decided to check out Vultr.

Update (June 2018): I don’t use Vultr anymore, I moved my domain to UpCloud (they are that awesome). Use this link to signup and get $25 free credit. Read the steps I took to move my domain to UpCloud here.

UpCloud is way faster.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Setting up a  Vultr Server

1) Go to http://www.vultr.com/?ref=7192231 and create your own server today.

2) Create an account at Vultr.

Vultr signup

3) Add a  Credit Card

Vultr add cc

4) Verify your email address, Check https://my.vultr.com/promo/ for promos.

5) Create your first instance (for me it was an Ubuntu 16.04 x64 server, 2 CPU, 4GB RAM, 60GB SSD, 3,000MB transfer in Sydney for $20 a month). I enabled IPv6, Private Networking, and Sydney as the server location. Digital Ocean would only have offered 2GB RAM and 40GB SSD at this price. AWS would have charged $80/w.

Vultr deploy vm

2 Cores and 4GB ram is what I am after (I will use it for NGINX, MySQL, PHP, MongoDB, OpCache and Redis etc).

Vultr 20 month

6) I followed this guide: I generated a local SSH key and added it to Vultr.

snip

cd ~/.ssh
ls -al
ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/Users/username/.ssh/id_rsa): vultr_rsa    
Enter passphrase (empty for no passphrase): 
Enter same passphrase again: 
Your identification has been saved in vultr_rsa.
Your public key has been saved in vultr_rsa.pub.
cat vultr_rsa.pub 
ssh-rsa AAAAremovedoutput

Vultr add ssh key

7) I was a bit confused by the UI for adding the SSH key on the in-progress server deploy screen (the SSH key was added but was not highlighted, so I recreated the server to deploy and the SSH key now appears).

Vultr ass ssh key 2

Now time to deploy the server.

Vultr deploy now

Deploying now.

Vultr my servers

My Vultr server is now deployed.

Vultr server information

I connected to it with my SSH program on my Mac.

Vultr ssh

Now it is time to redirect my domain (purchased through Namecheap) to the new Vultr server IP.

DNS: @ A Name record at Namecheap

Vultr namecheap

Update: I forgot to add an A Name for www.

Vultr namecheap 2

DNS: Vultr (added the same @ and www A Name records (FYI: “@” was replaced with “”)).

Vultr dns

I waited 60 minutes and DNS propagation happened. I used the site https://www.whatsmydns.net to see how far DNS replication had progressed while I was still receiving an error.

Setting the Server’s Time and Timezone (Ubuntu)

I checked the time on the server but it was wrong (20 hours behind).

sudo hwclock --show
Tue 25 Jul 2017 01:29:58 PM UTC  .420323 seconds

I manually set the timezone to Sydney Australia.

dpkg-reconfigure tzdata

I installed the NTP time syncing service

sudo apt-get install ntp

I configured the NTP service to use Australian servers (read this guide).

sudo nano /etc/ntp.conf

# added
server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org

I checked the time after restarting NTP.

sudo service ntp restart
sudo hwclock --show

The time is correct 🙂
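
To confirm NTP is actually syncing (assuming the ntp package installed above), you can list the peers and check the clock status:

# List the NTP peers and their sync state
ntpq -p
# Show the current system time, timezone and sync status
timedatectl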

Installing the NGINX Web Server (Ubuntu)

More on the differences between the Apache and NGINX web servers can be found here.
sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

Installing NodeJS  (Ubuntu)

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

Installing MySQL  (Ubuntu)

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
>mysql Ver 14.14 Distrib 5.7.19, for Linux (x86_64) using EditLine wrapper
sudo mysql_secure_installation
>Y (Validate plugin)
>2 (Strong passwords)
>N (Don't change root password)
>Y (Remove anon accounts)
>Y (No remote root login)
>Y (Remove test DB)
>Y (Reload)
service mysql status
> mysql.service - MySQL Community Serve
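
If you later need a database for an application (e.g. WordPress), a minimal sketch; the database name, user and password below are placeholders:

# Create an application database and user (replace the names and password)
sudo mysql -u root -p -e "CREATE DATABASE myappdb CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
sudo mysql -u root -p -e "CREATE USER 'myappuser'@'localhost' IDENTIFIED BY 'use-a-strong-password-here';"
sudo mysql -u root -p -e "GRANT ALL PRIVILEGES ON myappdb.* TO 'myappuser'@'localhost'; FLUSH PRIVILEGES;"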

Install PHP 7.x and PHP7.0-FPM  (Ubuntu)

sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm

php.ini

sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
> edit: upload_max_filesize = 8M
> edit: max_input_vars = 1000
> edit: memory_limit = 128M
# medium server: memory_limit = 256M
# large server: memory_limit = 512M

Restart PHP

sudo service php7.0-fpm restart	
service php7.0-fpm status

Now install misc helper modules into php 7 (thanks to this guide)

sudo apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v

Initial NGINX Configuring – Pre SSL and Security (Ubuntu)

Here is a good guide on setting up NGINX for performance.

mkdir /www

Edit the NGINX configuration

sudo nano /etc/nginx/nginx.conf

File Contents: /etc/nginx/nginx.conf

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
        worker_connections 4000;
        use epoll;
        multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and other from within the kernel faster then read() + write()
        sendfile on;

        # send headers in one peace, its better then sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;


        # if client stop responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # define the request rate zone referenced by limit_req above (the rate is an example, tune to suit)
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

       # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # headerbuffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;


        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache informations about FDs, frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s;
        open_file_cache_valid 30s;
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

       # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
        types_hash_max_size 2048;
        # server_tokens off;

        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;


        # gzip_vary on;
        # gzip_proxied any;
        # gzip_comp_level 6;
        # gzip_buffers 16 8k;
        # gzip_http_version 1.1;
        # gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File Contents: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;
 
server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6
 
        access_log /var/log/nginx/myservername.com.log;
 
        root /usr/share/nginx/www;
        index index.php index.html index.htm;
 
        server_name www.myservername.com myservername.com localhost;
 
        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";
 
        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;
 
 
        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://myservername.com/$1 permanent;
 
        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 512M";
 
        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        #location ~ /\.ht {
        #       deny all;
        #}
}

I talked to Dmitriy Kovtun (SSL CS) on the Namecheap chat to resolve a privacy error (I stuffed up and was getting the errors “Your connection is not private” and “NET::ERR_SSL_PINNED_KEY_NOT_IN_CERT_CHAIN”).

Vultr chrome privacy

SSL checker says everything is fine.

Vultr ssl checker

I checked the certificate strength with SSL Labs (OK).

Vultr ssl labs

Test and Reload NGINX (Ubuntu)

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Create a test PHP file

<?php
phpinfo()
?>

It Works.

Install Utils (Ubuntu)

Install an interactive folder size program

sudo apt-get install ncdu
sudo ncdu /

Vultr ncdu

Install a better disk check utility

sudo apt-get install pydf
pydf

Vultr pydf

Display startup processes

sudo apt-get install rcconf
sudo rcconf

Install JSON helper

sudo apt-get install jq
# Download and display a json file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

Increase the console history

HISTSIZE=10000
HISTCONTROL=ignoredups
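
To make these history settings persist across logins, append them to your ~/.bashrc (a simple sketch):

echo 'HISTSIZE=10000' >> ~/.bashrc
echo 'HISTCONTROL=ignoredups' >> ~/.bashrc
source ~/.bashrc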

I rebooted to see if PHP started up.

sudo reboot

OpenSSL Info (Ubuntu)

Read about updating OpenSSL here.

Update Ubuntu

sudo apt-get update
sudo apt-get dist-upgrade

Vultr Firewall

I configured the server firewall at Vultr and ensured it was setup by clicking my server, then settings then firewall.

Vultr firewall

I then checked open ports with https://mxtoolbox.com/SuperTool.aspx

Assign a Domain (Vultr)

I assigned a  domain with my VM at https://my.vultr.com/dns/add/

Vultr add domain

Charts

I reviewed the server information at Vultr (nice).

Vultr charts

Static IP’s

You should also set up a static IP in /etc/network/interfaces as mentioned in the settings for your server https://my.vultr.com/subs/netconfig/?SUBID=XXXXXX

Hello,

Thank you for contacting us.

Please try setting your OS's network interface configuration for static IP assignments in this case. The blue "network configuration examples" link on the "Settings" tab includes the necessary file paths and configurations. This configuration change can be made via the provided web console.

Setting your instance's IP to static will prevent any issues that your chosen OS might have with DHCP lease failure. Any instance with additional IPs or private networking enabled will require static addresses on all interfaces as well. 

--
xxxxx xxxxxx
Systems Administrator
Vultr LLC

Back up your existing Ubuntu 16.04 DHCP network configuration

cp /etc/network/interfaces /interfaces.bak

I would recommend you log a Vultr support ticket and get the right IPV4/IPV6 details to paste into /etc/network/interfaces while you can access your IP.

It is near impossible to configure the static IP when the server is refusing a DHCP IP address (this happened to me after 2 months).

If you don’t have time to set up a static IP you can roll with automatic DHCP IP assignment, and when your server fails to get an IP you can manually run the following command (changing the network adapter to your network adapter) from the web root console.

dhclient -1 -v ens3 

I logged a ticket for each of my other servers to get the contents of /etc/network/interfaces.

Support Ticket Contents:

What should the contents of /etc/network/interfaces be for xx.xx.xx.xx (Ubuntu: 16.04, Static)

Q1) What do I need to add to the /etc/network/interfaces file to set a static IP for server www.myservername.com/xx.xx.xx.xx/SUBID=XXXXXX

The server's IPV4 IP is: XX.XX.XX.XX
The server's IPV6 IP is: xx:xx:xx:xx:xx:xx:xx:xx (Network: xx:xx:xx:xx::, CIRD: 64, Recursive DNS: None)
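
For reference, a minimal sketch of what a static Ubuntu 16.04 /etc/network/interfaces can look like; every address below is a placeholder and the interface name (ens3) is taken from the dhclient example above, so substitute the exact values Vultr gives you:

# /etc/network/interfaces (example only - use the values from your Vultr settings page)
auto lo
iface lo inet loopback

auto ens3
iface ens3 inet static
    address XX.XX.XX.XX
    netmask XX.XX.XX.XX
    gateway XX.XX.XX.XX

iface ens3 inet6 static
    address xx:xx:xx:xx::xxxx
    netmask 64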

Install an FTP Server (Ubuntu)

I decided on Pure-FTPd based on this advice. I did try vsftpd but it failed. I used this guide to set up FTP and a user. I was able to log in via FTP but decided to set up C9 instead, so I stopped the FTP service.

Connected to my Vultr domain with C9.io
I logged in to C9.io, created a new remote SSH connection to my Vultr server, copied the SSH key and added it to my Vultr authorized_keys file:
sudo nano authorized_keys

I opened the site with C9 and it setup my environment.

I do love C9.io

Vultr c9

Add an SSL certificate (Reissue existing SSL cert at Namecheap)

I had a chat with Konstantin Detinich (SSL CS) on Namecheap’s chat and he helped me through reissuing my certificate.

I have a three-year certificate so I reissued it.  I will follow the Namecheap reissue guide here.

I recreated the certificates:

cd /etc/nginx/
mkdir ssl
cd ssl
sudo openssl req -newkey rsa:2048 -nodes -keyout mydomain_com.key -out mydomain_com.csr
cat mydomain_com.csr
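
Before pasting the CSR into Namecheap it is worth sanity-checking what was generated (a quick optional check with the same openssl tool):

# Confirm the CSR contains the right domain and key size
openssl req -noout -text -in mydomain_com.csr | grep -E 'Subject:|Public-Key'
# Confirm the private key is intact
sudo openssl rsa -noout -check -in mydomain_com.key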

I pasted the CSR into the Namecheap Reissue Certificate form.

Vultr ssl cert

Tip: Make sure your certificate is using the same name as the old certificate.

I continued the Namecheap prompts and specified HTTP domain control verification.

Namecheap Output: When you submit your info for this step, the activation process will begin and your SSL certificate will be available from your Domain List. To finalize the activation, you’ll need to complete the Domain Control Validation process. Please follow the instructions below.

Now I will wait for the verification instructions.

Update: I waited a few hours and the instructions never came, so I logged in to the Namecheap portal, downloaded the HTTP domain verification file and uploaded it to my domain.

Vultr ssl cert 2

I forgot to add the text file to the NGINX allowed index files list.

I added the following line to /etc/nginx/sites-available/default:

index index.php index.html index.htm 53guidremovedE5.txt;

I reloaded and restarted NGINX

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

The file now loaded over port 80. I then checked with Namecheap chat (Alexandra Belyaninova) to speed up the HTTP domain verification and they said the text file needs to be placed in the /.well-known/pki-validation/ folder (not specified in the earlier steps).

http://mydomain.com/.well-known/pki-validation/53guidremovedE5.txt and http://www.mydomain.com/.well-known/pki-validation/53guidremovedE5.txt

The certificate reissue was all approved and available for download.

Comodo

I uploaded all files related to the ssl cert to /etc/nginx/ssl/ and read my guide here to refresh myself on what is next.

I ran this command in the folder /etc/nginx/ssl/ to generate a DH prime rather than downloading a nice new one from here.

openssl dhparam -out dhparams4096.pem 4096

This Namecheap guide will tell you how to activate a new certificate and how to generate a CSR file. Note: that guide will generate a 2048-bit key, which will cap your SSL certificate's security to a B at https://www.ssllabs.com/ssltest/, so I recommend you generate a 4096-bit CSR key and a 4096-bit Diffie-Hellman key.
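
A minimal sketch of the 4096-bit route, reusing the file names from the earlier commands (the dhparam step can take several minutes on a small VM):

cd /etc/nginx/ssl
# 4096-bit private key and CSR (instead of the guide's 2048-bit example)
sudo openssl req -newkey rsa:4096 -nodes -keyout mydomain_com.key -out mydomain_com.csr
# 4096-bit Diffie-Hellman parameters (slow to generate)
sudo openssl dhparam -out dhparams4096.pem 4096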

I used https://certificatechain.io/ to generate a valid certificate chain.

My SSL /etc/nginx/sites-available/default config

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
	listen 80 default_server;
	listen [::]:80 default_server;

        error_log /www-error-log.txt;
        access_log /www-access-log.txt;
	
	listen 443 ssl;

	limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

	root /www;
        index index.php index.html index.htm;

	server_name www.thedomain.com thedomain.com localhost;

        # ssl on; # disabled - this caused too many http redirects
        ssl_certificate /etc/nginx/ssl/trust-chain.crt;
        ssl_certificate_key /etc/nginx/ssl/thedomain_com.key;
        ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        ssl_prefer_server_ciphers on;
        ssl_dhparam /etc/nginx/ssl/dhparams4096.pem;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        server_tokens off;
        ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        
        # Set SSL caching and storage/timeout values:
        ssl_session_timeout 4h;
        ssl_session_tickets off; # Requires nginx >= 1.5.9
        
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        ssl_stapling on; # Requires nginx >= 1.3.7
        ssl_stapling_verify on; # Requires nginx => 1.3.7
        add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

	add_header X-Frame-Options DENY;                                            # Prevent Clickjacking
 
        # Prevent MIME Sniffing
        add_header X-Content-Type-Options nosniff;
  
        # Use Google DNS
        resolver 8.8.8.8 8.8.4.4 valid=300s;
        resolver_timeout 1m;
 
        # This is handled with the header above.
        # rewrite ^/(.*) https://thedomain.com/$1 permanent;

	location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }
 
        fastcgi_param PHP_VALUE "memory_limit = 1024M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;
 
                # include snippets/fastcgi-php.conf;
 
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;
 
                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }
 
        # deny access to .htaccess files, if Apache's document root
        location ~ /\.ht {
               deny all;
        }
	
}
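
After reloading NGINX with the config above you can confirm the certificate chain and protocol being served from the command line (a quick check before re-running SSL Labs; replace thedomain.com with your domain):

# Test and reload the config
sudo nginx -t && sudo nginx -s reload
# Show the certificate chain and negotiated protocol the server presents
openssl s_client -connect thedomain.com:443 -servername thedomain.com < /dev/null | head -n 30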

My /etc/nginx/nginx.conf Config

# https://github.com/denji/nginx-tuning
user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /run/nginx.pid;
worker_rlimit_nofile 100000;
error_log /var/log/nginx/nginxcriterror.log crit;

events {
	worker_connections 4000;
	use epoll;
	multi_accept on;
}

http {

        limit_conn conn_limit_per_ip 10;
        limit_req zone=req_limit_per_ip burst=10 nodelay;

        # copies data between one FD and another from within the kernel, faster than read() + write()
        sendfile on;

        # send headers in one piece, it's better than sending them one by one
        tcp_nopush on;

        # don't buffer data sent, good for small data bursts in real time
        tcp_nodelay on;

        # reduce the data that needs to be sent over network -- for testing environment
        gzip on;
        gzip_min_length 10240;
        gzip_proxied expired no-cache no-store private auth;
        gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/json application/xml;
        gzip_disable msie6;

        # allow the server to close connection on non responding client, this will free up memory
        reset_timedout_connection on;

        # if client stop responding, free up memory -- default 60
        send_timeout 2;

        # server will close connection after this time -- default 75
        keepalive_timeout 30;

        # number of requests client can make over keep-alive -- for testing environment
        keepalive_requests 100000;

        # Security
        server_tokens off;

        # limit the number of connections per single IP 
        limit_conn_zone $binary_remote_addr zone=conn_limit_per_ip:10m;

        # limit the number of requests for a given session
        limit_req_zone $binary_remote_addr zone=req_limit_per_ip:10m rate=5r/s;

        # if the request body size is more than the buffer size, then the entire (or partial) request body is written into a temporary file
        client_body_buffer_size  128k;

        # headerbuffer size for the request header from client -- for testing environment
        client_header_buffer_size 3m;

        # to boost I/O on HDD we can disable access logs
        access_log off;

        # cache information about FDs and frequently accessed files
        # can boost performance, but you need to test those values
        open_file_cache max=200000 inactive=20s; 
        open_file_cache_valid 30s; 
        open_file_cache_min_uses 2;
        open_file_cache_errors on;

        # maximum number and size of buffers for large headers to read from client request
        large_client_header_buffers 4 256k;

        # read timeout for the request body from client -- for testing environment
        client_body_timeout   3m;

        # how long to wait for the client to send a request header -- for testing environment
        client_header_timeout 3m;
	types_hash_max_size 2048;
	# server_tokens off;
	# server_names_hash_bucket_size 64;
	# server_name_in_redirect off;

	include /etc/nginx/mime.types;
	default_type application/octet-stream;

	ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
	ssl_prefer_server_ciphers on;

	access_log /var/log/nginx/access.log;
	error_log /var/log/nginx/error.log;

	
	# gzip_vary on;
	# gzip_proxied any;
	# gzip_comp_level 6;
	# gzip_buffers 16 8k;
	# gzip_http_version 1.1;
	# gzip_types text/plain text/css application/json application/javascript text/xml application/xml application/xml+rss text/javascript;

	include /etc/nginx/conf.d/*.conf;
	include /etc/nginx/sites-enabled/*;
}


#mail {
#	# See sample authentication script at:
#	# http://wiki.nginx.org/ImapAuthenticateWithApachePhpScript
# 
#	# auth_http localhost/auth.php;
#	# pop3_capabilities "TOP" "USER";
#	# imap_capabilities "IMAP4rev1" "UIDPLUS";
# 
#	server {
#		listen     localhost:110;
#		protocol   pop3;
#		proxy      on;
#	}
# 
#	server {
#		listen     localhost:143;
#		protocol   imap;
#		proxy      on;
#	}
#}

Namecheap support checked my certificate with https://decoder.link/sslchecker/ (no errors). Other SSL checkers are https://certlogik.com/ssl-checker/ and https://sslanalyzer.comodoca.com/

I was given a new certificate to try by Namecheap.

Namecheap chat (Dmitriy) also recommended I clear my Google Chrome cache as they did not see errors on their side (this worked).

SSL Security

Read my past guide on adding SSL to a Digital Ocean server.

I am checking my site with https://www.ssllabs.com/ssltest/ (OK).

My site came up clean with shodan.io

Securing Ubuntu in the Cloud

Read my guide here.

OpenSSL Version

I checked the OpenSSL version to see if it was up to date.

openssl version
OpenSSL 1.1.0f  25 May 2017

Yep, all up to date https://www.openssl.org/

I will check often.

Install MySQL GUI

I installed the Adminer MySQL GUI tool (uploaded it to my site).

Don’t forget to check your server's IP with www.shodan.io to ensure there are no back doors.

I had to increase PHP's upload_max_filesize temporarily to allow me to restore a database backup. I edited the php.ini file at /etc/php/7.0/fpm/php.ini and then reloaded PHP:

sudo service php7.0-fpm restart
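
For reference, the php.ini edit can also be scripted; this is just an example (64M is an arbitrary value, set it a little above the size of your backup and wind it back afterwards):

# Raise the upload limits in php.ini (example values) and restart PHP-FPM
sudo sed -i 's/^upload_max_filesize.*/upload_max_filesize = 64M/' /etc/php/7.0/fpm/php.ini
sudo sed -i 's/^post_max_size.*/post_max_size = 64M/' /etc/php/7.0/fpm/php.ini
sudo service php7.0-fpm restart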

I used Adminer to restore a database.

Support

I found the email support from Vultr was great; I had an email reply in minutes. The Namecheap chat was awesome too. I did have an unplanned reboot on a Vultr node that one of my servers was on (let’s hope the server survives).

The Vultr service status page is located here.

Conclusion

I now have a secure server with MySQL and other web resources ready to go. I will now add some remote monitoring and restore a website along with NodeJS and MongoDB.

site ready

Definitely give Vultr a go (they even have data centers in Sydney). Sign up with this link http://www.vultr.com/?ref=7192231

Namecheap is great for certificates and support.

ssl labs

Vultr API

Vultr has a great API that you can use to automate status pages or obtain information about your VM instances.

API Location: https://www.vultr.com/api/

First, you will need to activate API access and allow your IP addresses (IPV4 and IPV6) in Vultr. At first, I only allowed IPV4 addresses but it looks as if Vultr uses IPV6 internally, so add your IPV6 IP as well (if you are hitting the API from a Vultr server). Beware that the JSON returned from the https://api.vultr.com/v1/server/list API has URLs (and tokens) to your virtual console and root passwords, so ensure your API key is secured.
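
Before writing any PHP you can confirm your API key and IP allow-list are working from the shell (this pairs nicely with the jq utility installed earlier; YOUR_API_KEY is a placeholder):

# Quick test of the Vultr v1 API from the command line
curl -s -H 'API-Key: YOUR_API_KEY' 'https://api.vultr.com/v1/server/list' | jq .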

Here is some working PHP code to query the API

<?php

$ch = curl_init();
$headers = [
     'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);

// Print the raw JSON response
print $server_output;

// Or decode the JSON into a PHP array and dump it
print_r(json_decode($server_output, true));
?>

Your server will need curl installed (the PHP curl extension) and you will need to enable URL opening in your php.ini file.

allow_url_fopen = On

Once you have curl (and the API) working via PHP, this code will return data from the API for a nominated server (replace ‘123456’ with the id from your server at https://my.vultr.com/).

$ch = curl_init();
$headers = [
'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/list');

$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

// Replace 123456 with the ID from your server at https://my.vultr.com/

//Get Server Location
$vultr_location = $array['123456']['location'];
echo "Location: $vultr_location <br/>";

//Get Server CPU Count
$vultr_cpu = $array['123456']['vcpu_count'];
echo "CPUs: $vultr_cpu <br/>";

//Get Server OS
$vultr_os = $array['123456']['os'];
echo "OS: $vultr_os<br />";

//Get Server RAM
$vultr_ram = $array['123456']['ram'];
echo "Ram: $vultr_ram<br />";

//Get Server Disk
$vultr_disk = $array['123456']['disk'];
echo "Disk: $vultr_disk<br />";

//Get Server Allowed Bandwidth
$vultr_bandwidth_allowed = $array['123456']['allowed_bandwidth_gb'];

//Get Server Used Bandwidth
$vultr_bandwidth_used = $array['123456']['current_bandwidth_gb'];

echo "Bandwidth: $vultr_bandwidth_used GB of $vultr_bandwidth_allowed GB<br />";

//Get Server Power Status
$vultr_power = $array['123456']['power_status'];
echo "Power State: $vultr_power<br />";

 //Get Server State
$vultr_state = $array['123456']['server_state'];
echo "Server State: $vultr_state<br />";

A raw packet looks like this from https://api.vultr.com/v1/server/list

HTTP/1.1 200 OK
Server: nginx
Date: Sun, 30 Jul 2017 12:02:34 GMT
Content-Type: application/json
Transfer-Encoding: chunked
Connection: close
X-User: [email protected]
Expires: Sun, 30 Jul 2017 12:02:33 GMT
Cache-Control: no-cache
X-Frame-Options: DENY
Strict-Transport-Security: max-age=31536000
X-Content-Type-Options: nosniff

{"123456":{"SUBID":"123456","os":"Ubuntu 16.04 x64","ram":"4096 MB","disk":"Virtual 60 GB","main_ip":"###.###.###.###","vcpu_count":"2","location":"Sydney","DCID":"##","default_password":"removed","date_created":"2017-01-01 09:00:00","pending_charges":"0.01","status":"active","cost_per_month":"20.00","current_bandwidth_gb":0.001,"allowed_bandwidth_gb":"3000","netmask_v4":"255.255.254.0","gateway_v4":"###.###.###.#,"power_status":"running","server_state":"ok","VPSPLANID":"###","v6_main_ip":"####:####:####:###:####:####:####:####","v6_network_size":"##","v6_network":"####:####:####:###:","v6_networks":[{"v6_main_ip":"####:####:####:###:####:####::####","v6_network_size":"##","v6_network":"####:####:####:###::"}],"label":"####","internal_ip":"###.###.###.##","kvm_url":"removed","auto_backups":"no","tag":"Server01","OSID":"###","APPID":"#","FIREWALLGROUPID":"########"}}

I recommend the Paw software for any API testing locally on OSX.

Bonus: Converting Vultr Network totals from the Vultr API with PHP

Add the following as a global PHP function in your PHP file. Found the number formatting solution here.

<?php
// Found at https://stackoverflow.com/questions/2510434/format-bytes-to-kilobytes-megabytes-gigabytes 

function swissConverter($value, $format = true, $precision = 2) {
    // Below converts value into bytes depending on input (specify mb, for 
    // example)
    $bytes = preg_replace_callback('/^\s*(\d+)\s*(?:([kmgt]?)b?)?\s*$/i', 
    function ($m) {
        switch (strtolower($m[2])) {
          case 't': $m[1] *= 1024;
          case 'g': $m[1] *= 1024;
          case 'm': $m[1] *= 1024;
          case 'k': $m[1] *= 1024;
        }
        return $m[1];
        }, $value);
    if(is_numeric($bytes)) {
        if($format === true) {
            //Below converts bytes into proper formatting (human readable 
            //basically)
            $base = log($bytes, 1024);
            $suffixes = array('', 'KB', 'MB', 'GB', 'TB');   

            return round(pow(1024, $base - floor($base)), $precision) .' '. 
                     $suffixes[floor($base)];
        } else {
            return $bytes;
        }
    } else {
        return NULL; //Change to preferred response
    }
}
?>

Now you can query the https://api.vultr.com/v1/server/bandwidth?SUBID=123456 API and get bandwidth information related to your server (replace 123456 with your servers ID).

<h4>Network Stats:</h4><br />
<?php

$ch = curl_init();
$headers = [
    'API-Key: removed'
];
curl_setopt($ch, CURLOPT_HTTPHEADER, $headers);
curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, false);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);

// Change 123456 to your server ID

curl_setopt($ch, CURLOPT_URL, 'https://api.vultr.com/v1/server/bandwidth?SUBID=123456');

$server_output = curl_exec($ch);
curl_close($ch);
//print $server_output;

$array = json_decode($server_output, true);

//Get 123456 Incoming Bytes Day Before Yesterday
$vultr123456_imcoming00ib = $array['incoming_bytes'][0][1];
echo " &nbsp; &nbsp; Incoming Data Total Day Before Yesterday: <strong>" . swissConverter($vultr123456_imcoming00ib, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Yesterday
$vultr123456_imcoming00ib = $array['incoming_bytes'][1][1];
echo " &nbsp; &nbsp; Incoming Data Total Yesterday: <strong>" . swissConverter($vultr123456_imcoming00ib, true) . "</strong><br/>";

//Get 123456 Incoming Bytes Today
$vultr123456_imcoming00ib = $array['incoming_bytes'][2][1];
echo " &nbsp; &nbsp; Incoming Data Total Today: <strong>" . swissConverter($vultr123456_imcoming00ib, true) . "</strong><br/><br/>";

//Get 123456 Outgoing Bytes Day Before Yesterday 
$vultr123456_imcoming10ob = $array['outgoing_bytes'][0][1];
echo " &nbsp; &nbsp; Outgoing Data Total Yesterday: <strong>" . swissConverter($vultr123456_imcoming10ob, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Yesterday 
$vultr123456_imcoming10ob = $array['outgoing_bytes'][1][1];
echo " &nbsp; &nbsp; Outgoing Data Total Yesterday: <strong>" . swissConverter($vultr123456_imcoming10ob, true) . "</strong><br/>";

//Get 123456 Outgoing Bytes Today 
$vultr123456_imcoming00ob = $array['outgoing_bytes'][2][1];
echo " &nbsp; &nbsp; Outgoing Data Total Today: <strong>" . swissConverter($vultr123456_imcoming00ob, true) . "</strong><br/>";

echo "<br />";
?>

Bonus: Pinging a Vultr server from the Vultr API with PHP’s fsockopen function

Paste the ping function globally

<?php
function pingfsockopen($host,$port=443,$timeout=3)
{
        // Suppress the warning fsockopen emits if the host is unreachable
        $fsock = @fsockopen($host, $port, $errno, $errstr, $timeout);
        if ( ! $fsock )
        {
                return FALSE;
        }
        else
        {
                fclose($fsock);
                return TRUE;
        }
}
?>

Now you can grab the server's IP from https://api.vultr.com/v1/server/list and then ping it (on SSL port 443).

//Get Server 123456 IP
$vultr_mainip = $array['123456']['main_ip'];
$up = pingfsockopen($vultr_mainip);
if( $up ) {
        echo " &nbsp; &nbsp; Server is UP.<br />";
}
else {
        echo " &nbsp; &nbsp; Server is DOWN<br />";
}

Setup Google DNS

sudo nano /etc/network/interfaces

Add the line

dns-nameservers 8.8.8.8 8.8.4.4
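
Then restart networking (ideally from the Vultr web console in case your SSH session drops) and confirm the resolvers took effect; a quick check:

sudo systemctl restart networking
# Confirm the Google resolvers are now in use
cat /etc/resolv.conf
nslookup google.com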

What have I missed?

Read my blog post on Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute.

Read my blog post on securing your Ubuntu server in the cloud.

Read my blog post on running an Ubuntu system scan with Lynis.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.993 added log info

Filed Under: Cloud, Development, DNS, Hosting, MySQL, NodeJS, OS, Server, ssl, Ubuntu, VM Tagged With: server, ubuntu, vultr

How to develop software ideas

July 9, 2017 by Simon

I was recently at a public talk by Alan Jones at the UNE Smart Region Incubator where Alan talked about launching startups and developing ideas.

Alan put it quite eloquently that “With change comes opportunity”, and we are all very capable of building the next best thing as technological barriers and costs are a lot lower than 5 years ago. Alan also mentioned that most start-ups fail, but “if you focus on solving customer problems you have a better chance of succeeding”. Regions need to share knowledge and you can learn from other people's mistakes.

I was asked after this event to share thoughts on “how do I learn to develop an app” and “how do you get the knowledge”. Here is my poor “brain dump” on how to develop software ideas (It’s hard to condense 30 years experience developing software). I will revise this post over the coming weeks so check back often.

If you have never programmed before, check out these programming 101 guides here.

I have blogged on technology/knowledge things in the past at www.fearby.com and recently I blogged about how to develop cloud-based services (here, here, here, here and here) but this blog post assumes you have a validated “app idea” and you want to know how to develop yourself. If you do not want to develop an app yourself you may want to speak with Blue Chilli.

Find a good mentor.


True App Development Quotes

  • Finding development information is easy, following a plan is hard.
  • Aim for progress and not perfection.
  • Learn one thing at a time (Multitasking can kill your brain).
  • Fail fast and fail early and get feedback as early as possible from customers.
  • 10 engaged customers are better than 10,000 disengaged users.

And a bit of humour before we start.

Project Mangement Lol

(click for larger image)

Here is a funny video on startup/entrepreneur life/lingo


This is a good funny, open and honest video about programming on YouTube.

Follow Seth F Samuel on twitter here.

Don’t be afraid to learn from others before you develop

My fav tips from over 200 failed startups (from https://www.cbinsights.com/blog/startup-failure-post-mortem/ )

  • Simpler websites shouldn’t take more than 2-3 months. You can always iterate and extrapolate later. Wet your feet asap
  • As products become more and more complex, the performance degrades. Speed is a feature for all web apps. You can spend hundreds of hours trying to speed up the app with little success. Incorporating benchmarking tools into the development cycle from the beginning is a good idea
  • Outsource or buy in talent if you don’t know something (e.g marketing). Time is money.
  • Make an environment where you will be productive. Working from home can be convenient, but often times will be much less productive than a separate space. Also it’s a good idea to have separate spaces so you’ll have some work/life balance.
  • Not giving enough time to stress and load testing or leaving it until the last minute is something startups are known for — especially true of small teams — but it means things tend to get pretty tricky at scale, particularly if you start adding a user every four seconds.
  • It’s possible to make a little money from a lot of people, or a lot of money from a few people. Making a little money from a few people doesn’t add up. If you’re not selling something, you better have a LOT of eyeballs. We didn’t.
  • We received conflicting advice from lots of smart people about which is more important. We focused on engagement, which we improved by orders of magnitude. No one cared. Lesson learned: Growth is the only thing that matters if you are building a social network. Period. Engagement is great but you aren’t even going to get the meeting unless your top-line numbers reach a certain threshold (which is different for seed vs. series A vs. selling advertising).
  • We most definitely committed the all-too-common sin of premature scaling. Driven by the desire to hit significant numbers to prove the road for future fundraising and encouraged by our great initial traction in the student market, we embarked on significant work developing paid marketing channels and distribution channels that we could use to demonstrate scalable customer acquisition. This all fell flat due to our lack of product/market fit in the new markets, distracted significantly from product work to fix the fit (double fail) and cost a whole bunch of our runway.
  • If you’re bootstrapping, cash flow is king. If you want to possibly build a product while your revenue is coming from other sources, you have to get those sources stable before you can focus on the product.
  • Don’t multiply big numbers. Multiply $30 times 1.000 clients times 24 months. WOW, we will be rich! Oh, silly you, you have no idea how hard it is to get 1.000 clients paying anything monthly for 24 months. Here is my advice: get your first client. Then get your first 10. Then get more and more. Until you have your first 10 clients, you have proved nothing, only that you can multiply numbers.
  • Customers pay for information, not raw data. Customers are willing to pay a lot more for information and most are not interested in data. Your service should make your customers look intelligent in front of their stakeholders. Follow up with inactive users. This is especially true when your service does not give intermediate values to your users. Our system should have been smarter about checking up on our users at various stages.
  • Do not launch a startup if you do not have enough funding for multiple iterations. The chances of getting it right the first time are about the equivalent of winning the lotto.

Here are my tips on staying on track developing apps. What is the difference between a website, app, API, web app, hybrid app and software (my blog post here)?

I have seen quite a few projects fail because:

  • The wrong technology was mandated.
  • The software was not documented (by the developers).
  • The software was shelved because new developers hated it or did not want to support it.

Project Roles (hats)

It is important to understand the roles in a project (project management methodology aside) and know when you are being a “decision maker” or a “technical developer”. A project usually has these roles.

  • Sponsor/owner (usually fund the project and have the final say).
  • Executive/Team leader/scrum master (manage day to day operations, people, tasks and resources).
  • Team members (UI, UX, Marketers, Developers (DevOps, Web, Design etc)) are usually the doers.
  • Stakeholders (people who are impacted (operations, owners, Helpdesk)).
  • Subject Matter Experts (people who should guide the work and not be ignored).
  • Testers (people who test the product and give feedback).

It can be hard as a developer to switch hats in a one-person team.

How do you develop and gain knowledge?

First, document what you need to develop (what problem are you solving and what value will your idea bring). Does this solution exist already? Don’t solve a problem that has already been solved.

Developing software is not hard, you just need to be logical, research, be patient and follow a plan. The hardest part can be gluing components together.

I like to think of developing software like making a car: if you need 4 wheels, do you have 4 wheels? If you want to build it yourself and save some money, can you make wheels (make rubber strips with steel reinforced/vulcanized rubber, make alloys, add bearings and have them pass regulations) or should you buy wheels (some things are cheaper to make than others)? Developing software can be easy if you know what you are doing, have the experience and are aware of the costs and risks. Developing software can lead you down a rabbit hole of endless research, development, and testing if you don’t know what you are doing.

Examples 1:

I “need a webpage”:

  • Research: Will Wix, Shopify or a hosted WordPress website do (is it flexible or cheap enough) or do I install WordPress (guide here) or do I  learn and build an HTML website and buy a theme and modify it (and have a custom/flexible solution)?

Example 2:

I “need an iPhone and Android app”:

Research: You will need to learn iOS and Android programming and you may need a server or two to hold the app's data, webpage and API. You will also need to set up and secure the servers and choose to install a database or go with a “database as a service” like cloud.mongodb.com or Google Firebase.

Money can buy anything (but will it be flexible/cheap enough), time can build anything (but will it be secure enough).

Almost all systems will need a central database to store all data, you can choose a traditional relational SQL database or a newer NoSQL database. MySQL is a good/cheap relational SQL database and MongoDB is a good NoSQL database. You will need to decide on how your app talks to the database (directly or via an API (protected by OAuth or limited access tokens)).  It is a bad idea to open a database directly to the world with no security. Sites like www.shodan.io will automatically scan the Internet looking for open databases or systems and report this as an insecure site to anyone. It is in your interest to develop secure systems in all stages of development.

CRUD (Create, Read, Update and Delete) is a common group of database tasks that you can do to prove you can read, write, update and delete from a database. Performing CRUD operations is also a good benchmark to see how fast the database is. If the database is the slowest link then you can use memory to cache database values (read my guide here). Caching can turn a cheap server into a faster server. Learning by doing can quickly build skills, so “research”, “do” and “learn”.
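
As a rough illustration (assuming a local MySQL install, with testuser/testdb as hypothetical credentials), the four CRUD operations can be exercised and crudely timed from the shell:

# Hypothetical example: create a table then time a Create/Read/Update/Delete round trip
mysql -u testuser -p testdb -e "CREATE TABLE IF NOT EXISTS notes (id INT AUTO_INCREMENT PRIMARY KEY, body TEXT);"
time mysql -u testuser -p testdb -e "INSERT INTO notes (body) VALUES ('hello'); SELECT * FROM notes; UPDATE notes SET body = 'hi' WHERE id = 1; DELETE FROM notes WHERE id = 1;"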

Most solutions will need a website (and a web server). Here is a good article comparing Apache and Nginx (the leading open source web servers).

Stacks and Technology – There are loads of development environments (stacks), frameworks and technologies that you can choose. Frameworks supposedly make things easier and faster, but frameworks and technologies change frequently (see the 2016 frameworks to learn guide and 2017 frameworks to learn guide) and can be abandoned. Also be careful: most frameworks run 30% slower than raw server-side and client code. I’d recommend you learn a few technologies like NGINX, NodeJS, PHP and MySQL and move up from there.

The MEAN stack is a popular web development platform (MEAN = MongoDB, ExpressJS, Angular and NodeJS).

Apps can be developed for Apple platforms by signing up here (about $150 AUD a year) and using the XCode IDE. Apps can be developed for the Android Platform by using Android Studio (for about $20 (one-off fee)). Microsoft has a developer portal for the Windows Platform. Google also has an online scalable database as a service called Firebase. If you look hard enough you will find a service for everything but connecting those services can be timely, costly or make security and a scalable solution impossible so beware of using as-a-service platforms. I used the Corona SDK to develop an app but abandoned the platform due to changes in the vendor’s communication and enforced policies.

If you are not sure, don’t be afraid to ask for help on Twitter.

Twitter is awesome for finding experts

Recent twitter replies to a problem I had.

Learning about new Technology and Stacks

To build the knowledge you need to learn stuff, build stuff, test (benchmark), get feedback and build more stuff. I like to learn about new technology and stacks by watching Udemy courses and they have a huge list of development courses (Web Development, Mobile Apps, Programming Languages, Game Development, Databases,  Software Testing,  Software Engineering etc).

I am currently watching a Practical iOS 11 course by Stephen DeStefano on Udemy to learn about unreleased/upcoming features on the Apple iPhone (learning about XCode 9, Swift 4, What’s new in iOS 11, Drag and drop, PDF and ARKit etc).

Udemy is awesome (Udemy often have courses for $15).

If you want to learn HTML go to https://www.w3schools.com/.

https://devslopes.com/ has a number of development-related courses and an active community of developers in a chat system.

You can also do formal study via an education provider (e.g. Bachelor of computer sciences at UNE or Certificate IV in programming or Diploma in Software Development at TAFE).

I would recommend you use Twitter and follow keywords (hashtags) around key topics (e.g #www, #css, #sql, #nosql, #nginx, #mongodb, #ios, #apple, #android, #swift, #objectivec, #java, #kotlin) and identify users to follow. Twitter is great for picking up new information.

I follow the following developers on YouTube (TheSwiftGuy, AppleProgrammer, AwesomeTuts, LetsBuildThatApp, CodingTech etc)

Companies like https://www.civo.com/ offer developer-friendly features with hosting, https://www.pebbled.io/ offer to develop for you and https://serverpilot.io/ help you spin up software on hosting providers.

What To Develop

First, you need to break down what you need. (e.g ” I want an app for iOS and Android in 5 months that does XYZ. The app must be secure and be fast. Users must be able to register an account and update their profile”).

How high your development project needs to scale depends on your peak expected/active concurrent users (and the ratio of paying to free users). You can develop your app to scale very high but this may cost more money initially, and it can be bad to pay to ensure scalability early. As long as you have a good product and robust networking/retry routines and UI, you don’t need to scale high early.

Once you know what you need you can search the open-source community for code that you can use. I use Alamofire for iOS network requests, SwiftyJSON for processing JSON data and other open-source software. The only downside of using open source software is it may be abandoned by the creators and break in the future. Saving your time early may cost you time later.

Then you can break down what you don’t want. (e.g “I don’t want a web app or a windows phone or windows desktop app”). From here you will have a list of what you need and what you can avoid.

You will also need to choose a project management methodology (I have blogged about this here). With a list of action items and a plan you can work through developing your app.

While you are researching it is a good idea to develop smaller fun projects to refine your skills. There are a number of System Development Life Cycles (SDLCs) but don’t worry if you get stuck; seek advice or move on. It is a good idea to get users beta testing your app early and seek feedback. Apple has the TestFlight app where you can send beta versions of apps to beta testers. Here is a good guide on Android beta testing.

If you are unsure about certain user interface options or features divide your beta testers and perform A/B or split testing to determine the most popular user interfaces. Capturing user data and logs can also help with debugging and user usage actions.

Practice

Develop smaller proof of concept apps in new technologies or frameworks and you will build your knowledge and uncover limitations in certain frameworks and how to move forward with confidence. It is advisable to save your source code for later use and to share with others.

I have shared quite a bit of code at https://simon.fearby.com/blog/ that I refer to from time to time. I should have shared this on GitHub but I know Google will find this if people want it.

Get as much feedback as you can on what you do and choose (don’t trust the first blog post you read (me included)).

Most companies offer webinars on their products. I like the NGINX webinars. Tutorialspoint has courses on development topics. Sitepoint is a good development site that offers free books, courses, and articles. ProgrammableWeb has information on what APIs are.

You may want to document your application flow to better understand how the user interface works.

Useful Tools

Balsamiq Mockups and Blueprint are handy for mocking up applications.

C9.io is a great web-based IDE that can connect to a VM on AWS or Digital Ocean. I have a guide on connecting Cloud 9 to an AWS VM here.

I use the Sublime Text 3 text editor when editing websites locally.

(image courtesy of https://www.sublimetext.com/ )

I use the Mac Paw app to help test API’s I develop locally.

(image courtesy of https://paw.cloud )

Snippets is a great application for the Mac for storing code snippets.

I use the Cornerstone Subversion app for backing up my code on my Mac.

Webservers: IIS Webserver (https://www.iis.net/), NGINX Webserver, Apache Webserver.

NodeJS programming manual and tutorials.

I use Little Snitch (guide here) for simulating network down in app development.

I use the Forklift file manager on OSX.

Databases: SQL tutorials, NoSQL Tutorials, MySQL documentation.

Siege is a command-line HTTP load testing tool.

CPU Busy

http://loader.io/ is a nice web-based benchmarking tool.

Bootstrap is an essential mobile responsive framework.

Atlassian Jira is an essential project tracking tool. More on Agile Epics v Stories v Tasks on the Atlassian community website here. I have a post on developing software and staying on track here using Jira.

Jsfiddle is a good site that allows you to share code you are working on or having trouble with.

Dribbble is a “show and tell” site for designers and creatives.

Stackoverflow is the go-to place to ask for help.

Things I care about during development phases.

  • Scalability
  • Flexibility
  • Risk
  • Cost
  • Speed

Concentrating too much on one facet can risk exposing other facets. Good programmers can recommend and deliver a solution that is strong in all areas (I hate developing apps that are slow but secure, or scalable but complex).

Platforms

You can signup for online servers like Azure, AWS (my guide here) or you can use a cheaper CPanel based hosting. Read my guide on the costs of running a cloud-based service.

Use my link to get a free Digital Ocean server for two months: https://www.digitalocean.com. Read my blog post here to help set up your VM. You can always install Ubuntu on your local machine as well (read my guide here). Don’t forget to use a GIT code repository like GitHub or Bitbucket.

Locally you can install Ubuntu (developers edition) and have a similar environment as cloud platforms.

Lessons Learned

  • Deploy servers close to the customers (Digital Ocean is too far away to scale in Australia).
  • Accessibility and testing (make things accessible from the start).
  • Backup regularly (Use GIT, backup your server and use Rsync to copy files to remote servers and use services like backblaze.com to backup your machine).
  • Transportability of technology (Use open technology and don’t lock yours into one platform or service).
  • Cost (expensive and convenient solutions may be costly).
  • Buy in themes and solutions (wrapbootstrap.com).
  • Do improve what you have done (make things better over time). Think progress and not perfection.

There is no shortage of online comments bagging certain frameworks or platforms so look for trends and success stories and don’t go with the first framework you find. Try candidate frameworks and services and make up your own mind.

A good plan, violently executed now, is better than a perfect plan next week. – General George S. Patton

Costs

Sometimes cost is not the deciding factor (read my blog post on Alibaba cloud). You should estimate your apps costs per 1000 users. What do light v heavy users cost you? I have a blog post on the approx cost of cloud services.  I started researching a scalable NoSQL platform on IBM Cloudant and it was going to cost $4,000 USD a month and integrating my own App logic and security was hard. I ended up testing MongoDB Cloud where I can scale to three servers for $80 a month but for now, I am developing my current project on my own AWS server with MongoDB instance. Read my blog post here on setting up MongoDB and read my blog post on the best MongoDB GUI.

Here is a great infographic for viewing what’s involved in mobile app development.

You can choose a number of tools or technologies to achieve your goals; for me it is doing it economically, securely and in a scalable way that has predictable costs. It is quite easy to develop something that is costly, won’t scale or is not secure or flexible. Don’t get locked into expensive technologies. For example, AWS has a user-pays NodeJS service called Lambda where you get a million free requests a month and then get charged $0.0000002 per request thereafter. This sounds good but I prefer fixed pricing/DIY servers as it allows me to build my own logic into apps (this is more important than scalability).

Using open-source software or off-the-shelf solutions may speed things up initially, but will it slow you down later? Ensure free solutions are complete and supported and ensure frameworks are helping. Do you need one server or multiple servers (guide on setting up a distributed MySQL environment)? You can read about my scalability on a budget journey here. You can speed up a server in two ways: scale up (add more MHz or CPU cores) or scale out (add more servers).

Start small and use free frameworks and platforms but have a tested scale-up plan, I researched cheap Digital Ocean servers and moved to AWS to improve latency and tested MongoDB on Digital Ocean and AWS but have a plan to scale up to cloud.mongodb.com if need be.

Outsource (contractors) 

Remember outsourcing work tasks (or complete outsourcing of development) can buy you time and or deliver software faster. Outsourcing can also introduce risks and be expensive. Ask for examples of previous work and get raw numbers on costs (now and in the future) and concurrent users that a particular bit of outsourcing work will achieve.

If you are looking to outsource work, do look at work that the person or company has done before (is it fast, compliant, mobile scalable, secure, robust and backed up, and do you have the rights to edit and own the IP etc). I’d be cautious of companies who say they can do everything and don’t show live demos.

Also, beware of restrictions on your code set by the contractors. Can they do everything you need (compare with your list of MoSCoW must-haves)? Sometimes contractors only code or do what they are comfortable with, and that can impact your deliverables.

Do use a private Git repository (that you own) like GitHub or BitBucket to secure your code and use software like Trello or Atlassian JIRA to track your project. Insist the contractors use your repository to retain control.

You can always sell equity in your idea to an investor and get feedback/development from companies like Bluechilli.

Monetization and data

Do have multiple monetization streams (initial app purchase cost, in-app purchase, subscriptions, in-app credit, advertising, selling code/components etc). Monthly revenue over yearly subscription works best to ensure cash flow.

Capture usage data and determine trends around successful engagement, Improve what works. Use A/B testing to roll out new features.

I like Backblaze post on getting your first 1,000 customers.

Maintenance, support risk and benefits

Building your own service can be cheaper but also riskier: if you fail to secure an app you are in trouble, if you cannot scale you are in trouble, and if you don’t update your server when vulnerabilities come out you are in trouble. Also, Google monetization strategies. Apple apps do appear to deliver more profits over Android. Developers often joke “Apple devices offer 90% of the profits and 10% of the problems and Android apps offer 90% of the problems and 10% of the profits”.

Also, Apple users tend to update to the latest operating system sooner where Android devices are rather fragmented.

Do inform your users with self-service status pages and informative error messages, and don’t annoy users.

Use Free Trials and Credit

Most vendors have free trials so use them

AWS (https://aws.amazon.com/free/) has 12-month free tiers.

Use this link to get two months free with Digital Ocean.

Microsoft Azure also gives away free credit.

Google Cloud also has free credit.

Don’t be afraid to ask.

MongoDB Cloud also gives away free credit if you ask.

Security

Sites like Shodan.io will quickly reveal weaknesses in your server (and services); this will help you build robust solutions from the start before hackers find them. Read https://www.owasp.org/index.php/Main_Page to learn how to develop secure websites. Listen to the SecurityNow podcast to learn how the technology works and is broken. Following TroyHunt is recommended to keep up to date with security in general. @0xDUDE is a good ethical hacker to follow to stay up to date on security exploits, and @GDI_FDN is a good non-profit organization that helps defend sites that use open source software.

White hat hackers exist but so do black hat ones.

Read the Open Web Application Security site here. Read my guide on setting up public key pinning in security certificates here.

I use the ASafaWeb site to test sites for common ASP security flaws. If you have a secure certificate on your site you will need to ensure the certificate is secure and up to date with the SSL Labs SSL Test site.

SSL Cert

Once your websites IP address is known (get it from SSL Labs) run a scan over your site with https://www.shodan.io/ to find open ports or security weaknesses.
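
If you would rather check from your own machine than rely on a third-party service, nmap (my suggestion, not mentioned above) gives a similar quick view of what is exposed:

# Optional: fast scan of the most common ports on your own server (replace the hostname)
sudo apt-get install nmap
nmap -F yourserver.example.com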

Shodan.io allows you and others to see public information about your server and services. You can read about well-known internet ports here.

Anyone can find your server if you are running older (or current) web servers and/or services.

It is a  good idea to follow security researchers like Steve Gibson and Troy Hunt and stay up to date with live exploits. http://blog.talosintelligence.com is also a good site for reading technical breakdowns of exploits.

Networking

Do share and talk about what you do with other developers. You can learn a lot from other developers and this can save you loads of time and mistakes. True developers love talking about their code and solutions.

Decision Making

Quite a lot of time can be spent deciding on what technology or platform to use; I decide by factoring in cost, risk and security over flexibility, support and scalability. If I need flexibility, lower support or scalability then I’ll choose a different technology/platform. Generally, technology can help with support. Scalable solutions need effort from start to finish (it is quite easy to slow down any technology or service).

Don’t be afraid to admit you have chosen the wrong technology or platform. It is far easier to research and move on than live with poor technology.

If you have chosen the wrong technology and stick with it, you (and others) will loathe working with it (impacting productivity/velocity). Do you spend time swapping technology or platforms now, or be less productive later?

Intellectual property and Trademarks

Ensure you search international trademarks for your app terms before you start using them. The Australian ATO has a good Australian business name checker here.

https://namechk.com/ is also a good place to search for your app ideas name before you buy or register any social media accounts.

Using https://namechk.com/ you can see “mystartupidea” name is mostly free.

And the name “microsoft” is mostly taken.

Seek advice from start-up experts at https://www.bluechilli.com/ like Alan Jones.

See my guide on how to get useful feedback for your ideas here.

Tips

  1. Use Git Source Control systems like GitHub or Bitbucket from the start and offsite backup your server and environments frequently. Digital Ocean charges 20% of your servers costs to back it up. AWS has multiple backup offerings.
  2. Start small and scale up when needed.
  3. Do lots of research and test different platforms, frameworks, and technologies and you will know what you should choose to develop with.

(Image above found at http://startupquotes.startupvitamins.com/ Follow Startup Vitamins on Twitter here.).

You will know when you are a developer when you have gained knowledge and experience and can automatically avoid technologies that will not fit a  solution.

Share

Don’t be afraid to share what you know (read my blog post on this here). Sharing allows you to solidify your knowledge and get new information. Shane Bishop from EWWW Image Optimizer  WordPress plugin wrote Setting up a fast distributed MySQL environment with SSL for us. If you have something to share on here please let me know here on twitter.

It’s never too late to do

One final tip: knowledge is not everything; planning and research are key. A mind that can’t develop may be better than a mind that can, because it has no experience (or baggage) and may find faster ways to do things. Thanks to http://zachvo.com/ for teaching me this during a recent WordPress re-deployment. Sometimes the simplest solution is the best one.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

DRAFT: 1.86 added short link

Short: https://fearby.com/go2/develop/

Filed Under: Advice, Android, Apple, Atlassian, Backup, BitBucket, Blog, Business, Cloud, CoronaLabs, Cost, Development, Domain, Firewall, Free, Git, GitHub, Hosting, JIRA, mobile app, MySQL, Networking, NodeJS, OS, Project Management, Scalability, Scalable, Security, Server, Software, Status, Trello, VM Tagged With: ideas

How to upgrade an AWS free tier EC2 t2.micro instance to an on demand t2.medium server

July 9, 2017 by Simon

Amazon Web Services have a great free tier where you can develop with very little cost. The free tier Linux server is a t2.micro server (1 CPU, low to moderate IO, 1GB memory, with 750 hours of usage a month). The free tier limits are listed here. Before you upgrade, can you optimize or cache content to limit usage?

When you are ready to upgrade resources you can use this cost calculator to set your region (I am in the Sydney region) and estimate your new costs.

awsupgrade001

You can also check out the ec2instances.info website for regional prices.

Current Server Usage

I have an NGINX server with a NodeJS back end powering an API that talks to a local MySQL database and a remote MongoDB cluster (on AWS via http://www.mongodb.com/cloud/). The MongoDB cluster was going to cost about $120 a month (too high for testing an app before launch). The free tier AWS instance is running below the 20% usage limit so this costs nothing (on the AWS free tier).

You can monitor your instance usage and credit in the Amazon Management Console, keep an eye on the CPU usage and CPU Credit and CPU Credit Balance.  If the CPU usage grows and balance drops you may need to upgrade your server to prevent usage charges.

AWS Upgrade

I prefer the optional htop command in Ubuntu to keep track of memory and process usage.

Basic information from the AWS t2.micro (idle)

AWS Upgrade

Older htop screenshot of a dual CPU VM being stressed.

CPU Busy

Future Requirements

I plan on moving a non-cluster MongoDB database onto an upgraded AWS free tier instance and develop and test there and when and if a scalable cluster is needed I can then move the database back to http://www.mongodb.com/cloud/.

First I will need to upgrade my Free tier EC2 instance to be able to install MongoDB 3.2 and power more local processing. Upgrading will also give me more processing power to run local scripts (instead of hosting them in Singapore on Digital Ocean).

How to upgrade a t2.micro to t2.medium

The t2.medium server has 2 CPUs, 4 GB of memory and low to moderate IO.

  1. Backup your server in AWS (and manual backup).
  2. Shutdown the server with the command sudo shutdown now
  3. Login to your AWS Management Console and click Instances (note: Your instance will say it is running but it has shut down (test with SSH and connection will be refused)).
  4. Click Actions, Image then Create Image
  5. Name the image (select any volumes you have created) and click Create Image.
  6. You can follow the progress of the snapshot creation under the Snapshots menu. AWS Upgrade
  7. When the volumes have been snapshotted you can stop the instance.
  8. Now you can change the instance type by right clicking on the stopped instance and selecting Instance Settings then Change Instance Type  AWS Upgrade
  9. Choose the new desired instance type and click apply (double check your cost estimates) AWS Upgrade
  10. You can now start your instance.
  11. After a few minutes, you can log in to your server and confirm the cores/memory have increased.AWS Upgrade
  12. Note: Your servers CPU credits will be reset and your server will start with lower CPU credits. Depending on your server you will receive credits every hour up to your maximum cap.AWS Upgrade
  13. Note: Don’t forget to configure your software to use more RAM or Cores if required.
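
For what it is worth, the stop/change-type/start steps above can also be done with the AWS CLI once it is configured (a sketch; the instance ID is a placeholder):

# Stop the instance, change its type, then start it again (placeholder instance ID)
aws ec2 stop-instances --instance-ids i-0123456789abcdef0
aws ec2 wait instance-stopped --instance-ids i-0123456789abcdef0
aws ec2 modify-instance-attribute --instance-id i-0123456789abcdef0 --instance-type "{\"Value\": \"t2.medium\"}"
aws ec2 start-instances --instance-ids i-0123456789abcdef0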

Misc

Confirm processors via the command line:

grep ^processor /proc/cpuinfo | wc -l 
>2

My NGINX configuration changes.

worker_processes 2;
worker_cpu_affinity auto;
...
events {
	worker_connections 4096;
	...
}
...

Find the open file limit of your system (for NGINX worker_connections maximum)

ulimit -n
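
If that number is lower than the worker_rlimit_nofile and worker_connections values you want to use, it can be raised; a minimal sketch (65535 is an arbitrary example):

# Raise the open file limit for the www-data user (example value)
echo 'www-data soft nofile 65535' | sudo tee -a /etc/security/limits.conf
echo 'www-data hard nofile 65535' | sudo tee -a /etc/security/limits.conf
# On systemd-based Ubuntu the nginx service may also need a LimitNOFILE override:
# sudo systemctl edit nginx   (add [Service] and LimitNOFILE=65535), then restart nginx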

That’s all for now.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.0 DRAFT Initial Post

Filed Under: Advice, AWS, Cloud, Computer, Server, Ubuntu, VM Tagged With: AWS, instance, upgrade

Alibaba Cloud how good is it?

June 18, 2017 by Simon

I recently clicked an Alibaba Cloud ad banner on Facebook that said Alibaba Cloud offers a cloud server (with 40GB SSD, 1x CPU, 1GB RAM and 1TB traffic for $30 a year). I am familiar with Alibaba (the Chinese AWS equivalent) and the owner Jack Ma (and his mindset). The thing that got me interested was that hosting was available in Sydney (something that Digital Ocean does not offer). This hosting deal seems too good to be true. Digital Ocean has similar hosting plans with half the storage for $5 a month (but not hosted in Australia). I asked Alibaba on Twitter (@alibaba_cloud) some questions last Sunday and got an answer on Friday evening, over 5 days later.

If You sign up for Alibaba Cloud here you will get $300 of New User Free Credit to use in 60 days.

I deploy servers on the awesome Vultr platform; you can deploy a server for as low as $2.5 a month, read my guide for setting up a Vultr server here. I love the Vultr management interfaces and support, that’s why I use them. This server is on a Vultr server and it’s fast.

Or deploy an SSD cloud server like Digital Ocean in 55 seconds. Sign up using my link and receive $10 in credit (2 free months): https://www.digitalocean.com

Read my other guides

    • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin
    • How to buy a new domain and SSL cert from NameCheap, a Server from Digital Ocean and configure it.
    • The quickest way to setup a scalable development ide and web server
    • Adding a commercial SSL certificate to a Digital Ocean VM
    • Application scalability on a budget (my journey)

Alibaba Cloud

Alibaba seems to have similar products as Amazon Web Services.
alibaba001

After you click Create Free Account you will need to enter a credit or debit card.
alibaba002

You need to verify an email address and mobile number by verifying random tokens (that you will receive).
alibaba003

An email verify token will look like this.
alibaba004

When your account is created you will be sent an email that states:

“Dear username,

Congratulations for successfully registering an Alibaba Cloud account! Your account ID is:[email protected] _____.___. Please remember to change your password on a regular basis and keep all account and password information secure.

Starting from today, you have 60 days to try and test Alibaba Cloud products with $300 New User Free Credit! The Free Credit will be automatically delivered to your account once you have successfully added a verified payment method. Follow this link to add your payment method and necessary information.”
alibaba005

You will need to add a billing address.
alibaba006

You will need to add a payment method.
alibaba007

Once the payment information is added you will need to verify it. Alibaba says they will deposit a small amount, with a verification code in the transaction description, into your bank account, which you will need to look up to verify your account (fair enough).

alibaba008

The problem is that I received two deposits and two codes, neither of which worked, and the format of the deposit descriptions was not as expected (I was expecting “INTL* #####” and received “###### INTL*”; I tried both of these codes and neither worked).
alibaba009 error - multiple

The codes would not work.
alibaba010

Error: “The verification code is incorrect. You have 4 attempts left to verify.”.

alibaba011_verify_bank_failed

After 4 attempts I was blocked.
alibaba012_verify_bank_failed

I then sought to report the problem by logging a support ticket. In the Alibaba support area it appeared I had a “Basic” account that may not have support rights, as I could see a “Purchase Support Plan” link.
alibaba013_trying_to_report_error

I found the create support ticket link and proceeded to attach the relevant screenshots. Apparently TIFF files (the Mac default screenshot format) are not allowed.
alibaba014_unsupported_attahment_format

I proceeded to convert my screenshots to PNG files and attach them.
alibaba015_reported_verification_error

The support ticket was logged.
alibaba016_waiting_for_response

I tried again to add and verify my payment method.
alibaba017_readded_payment_card

Again I received two invalid verification deposits and codes.
alibaba018_multiple_verify_tokens_again_and_none_at_012c

Alibaba billing has a few FAQ articles that may or may not help me:

Why have I received two charges to my account?

Alibaba sends both a microcharge and an authorization hold to your card during signup. The authorization hold verifies that your card details are valid and that you do not have a zero balance. The authorization hold is always 1.00 USD. The microcharge is used to verify that you are the owner of the card, and can be any amount less than 1.00 USD. Both charges will be returned to you within 24 hours.

What if my card statement is not in US Dollars?

Users outside of the United States may not be able to see the original USD amount of the microcharge in their bank statement because they are billed in local currency. In this case, you may opt to use the accompanying 6-digit transaction code displayed in the transaction description in order to complete account verification.

Alas, I am still stuck verifying my payment method.
alibaba019

Waiting for support

A full business day later, I am still waiting for feedback on the ticket.

alibaba021

Support Response

Support said I should NOT enter the transaction ID and should enter the amount instead.

alibaba022

I tried that with no luck 🙁 I have removed my payment details and will stick with Digital Ocean.

Alibaba

It appears Alibaba still has issues with multiple bank verification IDs being sent. I have deleted my payment card from Alibaba and will stick with Digital Ocean and AWS. I guess I could pay Alibaba more money for help, but no.

Alibaba Support: “If you want higher level support, refer to the Alibaba Cloud Support Plans page: https://intl.aliyun.com/support/after-sales”

Do they realise I am not a US citizen and entering the amount will not work?

Alibaba Support (Official)

Support Engineer : Dear customer, we apologize for the bad experience you have suffered in micro charge verification, we have engaged the product and development team to make the feature enhancement.

Some of the bank may do not display the USD and random code in the billing statement, we are contacting the bank for further investigation. Currently, you have deleted the adding in the console, would you please try to add the card again to get the authorization & micro charge?

If you have problem in verification, please provide the screenshot of both authorization hold & micro charge in billing statement, we can bypass the micro charge for you after evaluation.

Feel free to contact us if there are any other questions, thanks for your support.

Alibaba Pros

  • Australian Hosting Locations
  • Robust signup verifications (email, phone and bank), so much so that I am still not verified.
  • Cheap (if you can get in)???

Alibaba Cons

  • Broken Signup
  • I can’t create a VM due to signup verification
  • No Authenticator Two-Factor Authentication at login.
  • Alibaba Cloud iOS app is not available in English
  • Support is lacking.

Alibaba Cloud iOS App



Conclusion

I don’t know how good Alibaba Cloud is as it appears the signup is broken for Australian customers. For now, stick with Digital Ocean and AWS.

I have received an email from Alibaba saying they need more information.

“> Please provide more information so that we can deal with the problem.
> You can log on Aliyun.com or click here to submit feedback.” I tried Alibaba's suggestion of entering the bank verification amount, but that did not work. Alibaba needs better support. Do they realise I am not a US citizen and entering the amount will not work?

If you sign up for Alibaba Cloud here you will get $300 of New User Free Credit.

Deploy a server on Vultr for as low as $2.50 a month. Read my Vultr setup guide here.

or

Easily deploy an SSD cloud server on @DigitalOcean in 55 seconds. Sign up using my link and receive $10 in credit (2 free months): https://www.digitalocean.com

Read my other guides

    • Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin
    • How to buy a new domain and SSL cert from NameCheap, a Server from Digital Ocean and configure it.
    • The quickest way to setup a scalable development ide and web server
    • Adding a commercial SSL certificate to a Digital Ocean VM
    • Application scalability on a budget (my journey)


v1.6 Added Vultr info.

Filed Under: Backup, Hosting, IoT, Server, Tech Advice Tagged With: alibaba, hosting, vm


