
IoT, Code, Security, Server Stuff etc

Views are my own and not my employer's.




Measuring VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and comparing Vultr, Digital Ocean and UpCloud – Part 4 of 4

June 5, 2018 by Simon

How can you measure VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and compare Vultr, Digital Ocean and UpCloud? Part 4 of 4.

Read Part 1, Part 2, Part 3 or Part 4

I ran the MySQL benchmark preparation command again (no problem this time).

sysbench --test=oltp --oltp-table-size=1000000 --db-driver=mysql --mysql-db=test --mysql-user=root --mysql-password=###################### prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Creating table 'sbtest'...
Creating 1000000 records in table 'sbtest'...

Test table and records created

Now I can benchmark MySQL on my main server.

sysbench --test=oltp --oltp-table-size=1000000 --db-driver=mysql --mysql-db=test --mysql-user=root --mysql-password=################################# --max-time=60 --oltp-read-only=on --max-requests=0 --num-threads=8 run

Raw Output

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 8

Doing OLTP test.
Running mixed OLTP test
Doing read-only test
Using Special distribution (12 iterations,  1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
(last message repeated 7 times)
Done.

OLTP test statistics:
    queries performed:
        read:                            336210
        write:                           0
        other:                           48030
        total:                           384240
    transactions:                        24015  (400.09 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 336210 (5601.24 per sec.)
    other operations:                    48030  (800.18 per sec.)

Test execution summary:
    total time:                          60.0242s
    total number of events:              24015
    total time taken by event execution: 480.0242
    per-request statistics:
         min:                                  1.79ms
         avg:                                 19.99ms
         max:                                141.00ms
         approx.  95 percentile:              37.49ms

Threads fairness:
    events (avg/stddev):           3001.8750/27.36
    execution time (avg/stddev):   60.0030/0.01

Results

queries performed (in 60 seconds):

  • read: 336210
  • other: 48030
  • total: 384240

I decided to add an index to see if I could speed these queries up (read the MySQL index page here). I added an index (in Adminer) on the “Id” and “pad” columns of the sbtest table in the test database.
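I added the index through Adminer’s GUI; as plain SQL it would look roughly like this. This is a sketch, not the exact statement Adminer ran: the index name idx_id_pad is my own choice, and only the column names come from the steps above.

```shell
# Hypothetical SQL equivalent of the index added in Adminer.
# Index name idx_id_pad is made up; columns (id, pad) are the ones described above.
SQL='ALTER TABLE sbtest ADD INDEX idx_id_pad (id, pad);'
echo "$SQL"    # to actually run it: echo "$SQL" | mysql -u root -p test
```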

I restarted the MySQL process

sudo service mysql restart
[ ok ] Restarting mysql (via systemctl): mysql.service.

I ran the same benchmark again.

Raw Output

sysbench --test=oltp --oltp-table-size=1000000 --db-driver=mysql --mysql-db=test --mysql-user=root --mysql-password=########################## --max-time=60 --oltp-read-only=on --max-requests=0 --num-threads=8 run
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 8

Doing OLTP test.
Running mixed OLTP test
Doing read-only test
Using Special distribution (12 iterations,  1 pct of values are returned in 75 pct cases)
Using "BEGIN" for starting transactions
Using auto_inc on the id column
Threads started!
Time limit exceeded, exiting...
(last message repeated 7 times)
Done.

OLTP test statistics:
    queries performed:
        read:                            426538
        write:                           0
        other:                           60934
        total:                           487472
    transactions:                        30467  (507.69 per sec.)
    deadlocks:                           0      (0.00 per sec.)
    read/write requests:                 426538 (7107.67 per sec.)
    other operations:                    60934  (1015.38 per sec.)

Test execution summary:
    total time:                          60.0110s
    total number of events:              30467
    total time taken by event execution: 479.9124
    per-request statistics:
         min:                                  5.75ms
         avg:                                 15.75ms
         max:                                138.57ms
         approx.  95 percentile:              25.10ms

Threads fairness:
    events (avg/stddev):           3808.3750/8.70
    execution time (avg/stddev):   59.9891/0.00

Results

The quick index added roughly 27% extra throughput (from 400.09 to 507.69 transactions per second) 🙂
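Computing the gain from the transactions-per-second figures in the two runs above:

```python
# Transactions/sec before and after adding the index (from the sysbench runs above).
before, after = 400.09, 507.69
gain_pct = (after - before) / before * 100
print(f"Index gain: {gain_pct:.1f}%")  # roughly 27%
```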

Mysql before and after an index

Don’t forget to delete your test database

DROP DATABASE `test`;

Viewing MySQL Index Usage (on the “test” database)

Query to show index stats for tables in the ‘test’ database

SELECT
 OBJECT_SCHEMA as 'Database', OBJECT_NAME as 'Table', 
 INDEX_NAME as 'Index', 
 COUNT_STAR, 
 SUM_TIMER_WAIT,  MIN_TIMER_WAIT, AVG_TIMER_WAIT, MAX_TIMER_WAIT, 
 COUNT_READ, 
 SUM_TIMER_READ, MIN_TIMER_READ, AVG_TIMER_READ, MAX_TIMER_READ,  
 COUNT_FETCH, SUM_TIMER_FETCH, MIN_TIMER_FETCH, AVG_TIMER_FETCH, MAX_TIMER_FETCH
FROM 
 performance_schema.table_io_waits_summary_by_index_usage
WHERE 
 object_schema = 'test';

I can see the MySQL PRIMARY index is getting used 🙂

Index Summary

Read more about the viewable query stats (columns) here.

Other System Information Tools

Show processor information

cat /proc/cpuinfo

Output

processor       : 0
vendor_id       : GenuineIntel
cpu family      : 6
model           : 61
model name      : Virtual CPU a7769a6388d5
stepping        : 2
microcode       : 0x1
cpu MHz         : 2394.454
cache size      : 16384 KB
physical id     : 0
siblings        : 1
core id         : 0
cpu cores       : 1
apicid          : 0
initial apicid  : 0
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx rdtscp lm constant_tsc rep_good nopl xtopology eagerfpu pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm abm invpcid_single kaiser fsgsbase bmi1 avx2 smep bmi2 erms invpcid xsaveopt arat
bugs            : cpu_meltdown spectre_v1 spectre_v2 spec_store_bypass
bogomips        : 4788.90
clflush size    : 64
cache_alignment : 64
address sizes   : 40 bits physical, 48 bits virtual
power management:
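The full cpuinfo dump is verbose; for a quick summary of what matters here (logical CPU count and the model name) two greps are enough. A minimal sketch, assuming a Linux system with /proc mounted:

```shell
# Count logical CPUs, then show the (possibly masked) model name.
grep -c '^processor' /proc/cpuinfo
grep -m1 'model name' /proc/cpuinfo
```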

Memory Information

You can assign 512MB, 1GB, 2GB or more memory to a server on Vultr. Read my guide on upgrading resources for Vultr VMs here.

Only upgrade your server’s memory when server processes demand it; there is no need to pay for extra idle memory. Read my older guides on upgrading Digital Ocean and AWS servers.

I use the htop utility to monitor memory and processes. Memory usage will depend on how you have configured your server to use connection pools in code, MySQL or services. Also, what memory demands do you see at peak traffic times?

HTOP

You can check your server memory details on Ubuntu with this command

cat /proc/meminfo

Output

MemTotal:        2048104 kB
MemFree:           96176 kB
MemAvailable:     693072 kB
Buffers:          183476 kB
Cached:           526124 kB
SwapCached:            0 kB
Active:          1467220 kB
Inactive:         243228 kB
Active(anon):    1070464 kB
Inactive(anon):    27004 kB
Active(file):     396756 kB
Inactive(file):   216224 kB
Unevictable:        3652 kB
Mlocked:            3652 kB
SwapTotal:             0 kB
SwapFree:              0 kB
Dirty:                64 kB
Writeback:             0 kB
AnonPages:       1004504 kB
Mapped:           114664 kB
Shmem:             94192 kB
Slab:             192692 kB
SReclaimable:     171892 kB
SUnreclaim:        20800 kB
KernelStack:        3072 kB
PageTables:        20528 kB
NFS_Unstable:          0 kB
Bounce:                0 kB
WritebackTmp:          0 kB
CommitLimit:     1024052 kB
Committed_AS:    2424332 kB
VmallocTotal:   34359738367 kB
VmallocUsed:           0 kB
VmallocChunk:          0 kB
HardwareCorrupted:     0 kB
AnonHugePages:    247808 kB
CmaTotal:              0 kB
CmaFree:               0 kB
HugePages_Total:       0
HugePages_Free:        0
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
DirectMap4k:       67440 kB
DirectMap2M:     2029568 kB
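The meminfo dump above is easy to post-process. Here is a small helper of my own (not part of any tool mentioned in this post) that pulls the percentage of available memory out of /proc/meminfo-style text; it is shown against a two-line sample using the MemTotal/MemAvailable values above, and on a live server you would pass open("/proc/meminfo").read() instead:

```python
def mem_available_pct(meminfo: str) -> float:
    """Parse MemTotal/MemAvailable (both in kB) out of /proc/meminfo-style text."""
    fields = {}
    for line in meminfo.splitlines():
        key, _, rest = line.partition(":")
        if rest.strip():
            fields[key.strip()] = int(rest.split()[0])
    return 100 * fields["MemAvailable"] / fields["MemTotal"]

sample = "MemTotal:        2048104 kB\nMemAvailable:     693072 kB"
print(f"{mem_available_pct(sample):.1f}% of memory available")  # 33.8%
```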

Use Memory or Disk (Swap)

You can configure the preference for memory over disk by setting the “vm.swappiness” value in your /etc/sysctl.conf file

You can check your current swappiness setting by running the following command

cat /proc/sys/vm/swappiness
1

Or by running

sysctl vm.swappiness
vm.swappiness = 1

Set a new swappiness value by editing /etc/sysctl.conf

sudo nano /etc/sysctl.conf

Set the following to prefer RAM over the swap disk.

vm.swappiness = 1

Read about swappiness values here: https://en.wikipedia.org/wiki/Swappiness

Service Performance

Performance (and the resources you should allocate) depends on the demands of your operating system and installed software.

What operating system do you have?

lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description:    Ubuntu 16.04.4 LTS
Release:        16.04
Codename:       xenial

View the NGINX status: how much memory does it use?

/etc/init.d/nginx status
● nginx.service - A high performance web server and a reverse proxy server
   Loaded: loaded (/lib/systemd/system/nginx.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-05-25 21:28:25 AEST; 1 weeks 3 days ago
     Docs: man:nginx(8)
 Main PID: #### (nginx)
    Tasks: 3
   Memory: 58.9M
      CPU: 33min 11.515s
   CGroup: /system.slice/nginx.service
           ├─#### nginx: master process /usr/sbin/nginx -g daemon on; master_process on;
           ├─#### nginx: worker process
           └─#### nginx: cache manager process

PHP (and child worker) status: how much memory does it use and how many child workers do you have? Read my post on adding PHP child workers here (and updating to PHP 7.2 here).

sudo service php7.2-fpm status
● php7.2-fpm.service - The PHP 7.2 FastCGI Process Manager
   Loaded: loaded (/lib/systemd/system/php7.2-fpm.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-05-25 21:28:26 AEST; 1 weeks 3 days ago
     Docs: man:php-fpm7.2(8)
 Main PID: #### (php-fpm7.2)
   Status: "Processes active: 0, idle: 20, Requests: 75911, slow: 0, Traffic: 0.1req/sec"
    Tasks: 21
   Memory: 694.2M
      CPU: 20h 49min 45.132s
   CGroup: /system.slice/php7.2-fpm.service
           ├─ #### php-fpm: master process (/etc/php/7.2/fpm/php-fpm.conf)
           ├─ #### php-fpm: pool www-acc
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-acc
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           ├─ #### php-fpm: pool www-usr
           └─ #### php-fpm: pool www-usr

MySQL Status

sudo service mysql status
● mysql.service - MySQL Community Server
   Loaded: loaded (/lib/systemd/system/mysql.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2018-05-25 21:28:27 AEST; 1 weeks 3 days ago
 Main PID: ##### (mysqld)
    Tasks: 35
   Memory: 405.9M
      CPU: 2h 17min 31.822s
   CGroup: /system.slice/mysql.service
           └─#### /usr/sbin/mysqld

Shared VM Hosts

One of the biggest impacts (after network latency) on your server is not raw disk performance but the number of other hosts/websites on the same physical server that are competing for the disk and server resources.

Reverse IP Lookup

My server shares its host with about 80 other websites (based on a reverse IP lookup).

I may move to a dedicated box when I can afford it.

Security

Above all else, make security the number 1 priority and performance the second.

Scan your site with ZAP, Qualys and Kali Linux. Performance means nothing if you are hacked.

website-report

Simulated concurrent users

You can use Siege to test the maximum concurrent users accessing your site before the server starts to drop connections.

FYI: If you use Cloudflare (you should) this may not work as it will block connections.

Install Siege

sudo apt-get install siege

Test your server with 10 concurrent users for 1 minute

siege -t1m -c10 'https://yourserver.com/'

Results

siege -t1m -c10 'https://yourserver.com/'
** SIEGE 3.0.8
** Preparing 15 concurrent users for battle.
The server is now under siege...
Lifting the server siege...      done.

Transactions:                    417 hits
Availability:                 100.00 %
Elapsed time:                  59.01 secs
Data transferred:               8.24 MB
Response time:                  1.62 secs
Transaction rate:               7.07 trans/sec
Throughput:                     0.14 MB/sec
Concurrency:                   11.46
Successful transactions:         417
Failed transactions:               0
Longest transaction:            2.26
Shortest transaction:           1.49

Keep increasing the concurrency (from 10 above) until connections start dropping.
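One way to ramp the load is a small loop over increasing concurrency values. This sketch just prints each siege command (remove the echo to actually run them); the URL and step values are placeholders, and you would stop at the first step where Availability drops below 100%:

```shell
# Print one siege command per concurrency step (remove "echo" to run them).
for c in 10 25 50 100; do
  CMD="siege -t1m -c$c 'https://yourserver.com/'"
  echo "$CMD"
done
```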

I tried 25 then 50 concurrent users hitting a server on Digital Ocean and it did not fail.

Conclusion

  • Choose a server near your customers
  • Change hosts if one is faster and cheaper
  • Measure or benchmark your server (and compare over time).
  • Use Cloudflare

Create your own server today

  • Create your own server on Vultr here.
  • Create your own server on Digital Ocean here.
  • Create your own server on UpCloud here.

And remember you can install the Runcloud server management dashboard here.

I hope this guide helps someone.


Read Part 1, Part 2, Part 3 or Part 4

Filed Under: Cloud, Digital Ocean, disk, Domain, Linux, NGINX, Performance, PHP, php72, Scalability, Scalable, Speed, Storage, Ubuntu, UpCloud, Vultr, Wordpress

Measuring VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and comparing Vultr, Digital Ocean and UpCloud – Part 3 of 4

June 5, 2018 by Simon

How can you measure VM performance (CPU, Disk, Latency, Concurrent Users etc) on Ubuntu and compare Vultr, Digital Ocean and UpCloud? Part 3 of 4.

Read Part 1, Part 2, Part 3 or Part 4

I used these commands to generate bonnie++ reports from the data in part 2

echo "<h1>Bonnie Results</h1>" > /www-data/bonnie.html
echo "<h2>Vultr (Sydney)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528177870,4G,,656,99,308954,68,113706,33,1200,92,188671,30,10237,251,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,26067us,119ms,179ms,29139us,26069us,16118us,1463us,703us,880us,263us,119us,593us" | bon_csv2html >> /www-data/bonnie.html
echo "<h2>Digital Ocean (London)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528186398,4G,,699,99,778636,74,610414,60,1556,99,1405337,59,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,17678us,10099us,17014us,7027us,3067us,2366us,1243us,376us,611us,108us,59us,181us" | bon_csv2html >> /www-data/bonnie.html
echo "<h2>UpCloud (Singapore)</h2>" >> /www-data/bonnie.html
echo "1.97,1.97,servername,1,1528226703,4G,,1014,99,407179,24,366622,32,2137,99,451886,17,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,11297us,54232us,16443us,4949us,44883us,1595us,264us,340us,561us,138us,66us,327us" | bon_csv2html >> /www-data/bonnie.html

Image of results here

Bonnie Results

Network Performance

IMHO network latency has the biggest impact on server performance; read my old post on scalability on a budget here. I am in Australia, and having a server in Singapore was too far away: the latency was terrible.

Here is a non-scientific example of pinging a Vultr, Digital Ocean and UpCloud server in three different locations (and Google).

Ping Test

Test Ping Results

  • Vultr 132ms Ping Average (Sydney)
  • Digital Ocean 322ms Ping Average (London)
  • UpCloud 180ms Ping Average (Singapore)

Latency matters, run a https://www.webpagetest.org/ scan over your site to see why.

Adding HTTPS added almost 0.7 seconds to communications in the past on Digital Ocean (a few thousand kilometres away). The higher the latency, the longer HTTPS handshakes take.
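That ~0.7-second figure is roughly consistent with handshake round trips. A back-of-envelope sketch, assuming a classic TLS 1.2 full handshake costs about two extra round trips (TLS 1.3 needs only one), using the London ping average measured above:

```python
# Rough TLS handshake cost estimate from round-trip time.
rtt_s = 0.322            # Digital Ocean (London) ping average from the test above
tls12_round_trips = 2    # approx. extra round trips for a TLS 1.2 full handshake
extra_tls12 = tls12_round_trips * rtt_s
print(f"TLS 1.2 handshake overhead: ~{extra_tls12:.2f}s")
```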

SSL

Deploying a server to Singapore (in my experience) is bad if your visitors are in Australia, but deploying to other regions may cost less. It’s a trade-off.

Server Location

Deploying servers as close as you can to your customers is the best performance tip.

Deploy servers close to your customers

Also, consider setting up Image Optimization and Image CDN plugins (guide here) in WordPress and using Cloudflare (guide here)

Benchmarking with SysBench

Install CPU Benchmark

sudo apt-get install sysbench

CPU Benchmark (Vultr/Sydney)
sysbench --test=cpu --cpu-max-prime=20000 run

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          39.1700s
    total number of events:              10000
    total time taken by event execution: 39.1586
    per-request statistics:
         min:                                  2.90ms
         avg:                                  3.92ms
         max:                                 20.44ms
         approx.  95 percentile:               7.43ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   39.1586/0.00

39.15 seconds

CPU Benchmark (Digital Ocean/London)

sysbench --test=cpu --cpu-max-prime=20000 run

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1

Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          33.4382s
    total number of events:              10000
    total time taken by event execution: 33.4352
    per-request statistics:
         min:                                  3.24ms
         avg:                                  3.34ms
         max:                                  6.45ms
         approx.  95 percentile:               3.45ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   33.4352/0.00

33.43 sec

CPU Benchmark (UpCloud/Singapore)

sysbench --test=cpu --cpu-max-prime=20000 run

Result

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1



Doing CPU performance benchmark

Threads started!
Done.

Maximum prime number checked in CPU test: 20000


Test execution summary:
    total time:                          23.7809s
    total number of events:              10000
    total time taken by event execution: 23.7780
    per-request statistics:
         min:                                  2.35ms
         avg:                                  2.38ms
         max:                                  6.92ms
         approx.  95 percentile:               2.46ms

Threads fairness:
    events (avg/stddev):           10000.0000/0.00
    execution time (avg/stddev):   23.7780/0.00

23.77 sec

Surprisingly, 1st place in prime generation goes to UpCloud, then Digital Ocean, then Vultr. UpCloud has some good processors.

Processors:

  • UpCloud (Singapore): Intel(R) Xeon(R) CPU E5-2687W v4 @ 3.00GHz
  • Digital Ocean (London): Intel(R) Xeon(R) CPU E5-2630L v2 @ 2.40GHz
  • Vultr (Sydney): Virtual CPU a7769a6388d5 (Masked/Hidden CPU @ 2.40GHz)

(Lower is better)

prime benchmark results

(oops, typo in the chart should say Vultr)

Benchmark the file IO

Confirm free space

df -h /

Install Sysbench

sudo apt-get install sysbench

I had 10GB free on all servers (Vultr, Digital Ocean and UpCloud) so I created a 10GB test file.

sysbench --test=fileio --file-total-size=10G prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

128 files, 81920Kb each, 10240Mb total
Creating files for the test...
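As a quick sanity check on the file layout sysbench reports above (128 files of 81920 KB each):

```python
# Verify that 128 files x 81920 KB each really is 10240 MB.
files, kb_each = 128, 81920
total_mb = files * kb_each // 1024
print(f"{total_mb} MB total")  # matches sysbench's "10240Mb total"
```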

Now I can run the benchmark using the pre-created test files.

sysbench --test=fileio --file-total-size=10G --file-test-mode=rndrw --init-rng=on --max-time=300 --max-requests=0 run

SysBench description from the Ubuntu manpage.

“SysBench is a modular, cross-platform and multi-threaded benchmark tool for evaluating OS parameters that are important for a system running a database under intensive load. The idea of this benchmark suite is to quickly get an impression about system performance without setting up complex database benchmarks or even without installing a database at all.”

SysBench Results (Vultr/Sydney)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  385920 Read, 257280 Write, 823266 Other = 1466466 Total
Read 5.8887Gb  Written 3.9258Gb  Total transferred 9.8145Gb  (33.5Mb/sec)
 2143.98 Requests/sec executed

Test execution summary:
    total time:                          300.0026s
    total number of events:              643200
    total time taken by event execution: 182.4249
    per-request statistics:
         min:                                  0.01ms
         avg:                                  0.28ms
         max:                                 18.12ms
         approx.  95 percentile:               0.55ms

Threads fairness:
    events (avg/stddev):           643200.0000/0.00
    execution time (avg/stddev):   182.4249/0.00

SysBench Results (Digital Ocean/London)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  944280 Read, 629520 Write, 2014432 Other = 3588232 Total
Read 14.409Gb  Written 9.6057Gb  Total transferred 24.014Gb  (81.968Mb/sec)
 5245.96 Requests/sec executed

Test execution summary:
    total time:                          300.0024s
    total number of events:              1573800
    total time taken by event execution: 160.5558
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.10ms
         max:                                 18.62ms
         approx.  95 percentile:               0.34ms

Threads fairness:
    events (avg/stddev):           1573800.0000/0.00
    execution time (avg/stddev):   160.5558/0.00

SysBench Results (UpCloud/Singapore)

sysbench 0.4.12:  multi-threaded system evaluation benchmark

Running the test with following options:
Number of threads: 1
Initializing random number generator from timer.


Extra file open flags: 0
128 files, 80Mb each
10Gb total file size
Block size 16Kb
Number of random requests for random IO: 0
Read/Write ratio for combined random IO test: 1.50
Periodic FSYNC enabled, calling fsync() each 100 requests.
Calling fsync() at the end of test, Enabled.
Using synchronous I/O mode
Doing random r/w test
Threads started!
Time limit exceeded, exiting...
Done.

Operations performed:  994320 Read, 662880 Write, 2121090 Other = 3778290 Total
Read 15.172Gb  Written 10.115Gb  Total transferred 25.287Gb  (86.312Mb/sec)
 5523.97 Requests/sec executed

Test execution summary:
    total time:                          300.0016s
    total number of events:              1657200
    total time taken by event execution: 107.4434
    per-request statistics:
         min:                                  0.00ms
         avg:                                  0.06ms
         max:                                 15.43ms
         approx.  95 percentile:               0.13ms

Threads fairness:
    events (avg/stddev):           1657200.0000/0.00
    execution time (avg/stddev):   107.4434/0.00

Comparison

Sysbench Results table

sysbench fileio results (text)

Read

  • Vultr (Sydney): 385,920
  • Digital Ocean (London): 944,280
  • UpCloud (Singapore): 994,320

Write

  • Vultr (Sydney): 257,280
  • Digital Ocean (London): 629,520
  • UpCloud (Singapore): 662,880

Other

  • Vultr (Sydney): 823,266
  • Digital Ocean (London): 2,014,432
  • UpCloud (Singapore): 2,121,090

Total operations

  • Vultr (Sydney): 1,466,466
  • Digital Ocean (London): 3,588,232
  • UpCloud (Singapore): 3,778,290

Total Read Gb

  • Vultr (Sydney): 5.8887 Gb
  • Digital Ocean (London): 14.409 Gb
  • UpCloud (Singapore): 15.172 Gb

Total Written Gb

  • Vultr (Sydney): 3.9258 Gb
  • Digital Ocean (London): 9.6057 Gb
  • UpCloud (Singapore): 10.115 Gb

Total Transferred Gb

  • Vultr (Sydney): 9.8145 Gb
  • Digital Ocean (London): 24.014 Gb
  • UpCloud (Singapore): 25.287 Gb
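The throughput figures sysbench printed can be recovered from these totals: each run lasted 300 seconds, so total transferred (in sysbench’s “Gb” units) times 1024, divided by 300, gives the MB/sec each provider reported:

```python
# Reconstruct MB/sec from total data transferred over the 300-second runs above.
totals_gb = {
    "Vultr (Sydney)": 9.8145,
    "Digital Ocean (London)": 24.014,
    "UpCloud (Singapore)": 25.287,
}
for host, gb in totals_gb.items():
    print(f"{host}: {gb * 1024 / 300:.1f} MB/sec")
```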

Now I can remove the fileio benchmark test files

sysbench --test=fileio --file-total-size=10G cleanup
sysbench 0.4.12:  multi-threaded system evaluation benchmark

Removing test files...

Confirm the test files have been deleted

df -h /
Filesystem      Size  Used Avail Use% Mounted on
/dev/vda1        40G   16G   23G  41% /

Bonus: Benchmark MySQL (on my main server (Vultr), not on Digital Ocean and UpCloud)

I tried to run the prepare command.

sysbench --test=oltp --oltp-table-size=1000000 --db-driver=mysql --mysql-db=test --mysql-user=root --mysql-password=#################################### prepare
sysbench 0.4.12:  multi-threaded system evaluation benchmark

FATAL: unable to connect to MySQL server, aborting...
FATAL: error 1049: Unknown database 'test'
FATAL: failed to connect to database server!

To fix the error I created a ‘test’ database with Adminer (guide here).

Create Test Table


Read Part 1, Part 2, Part 3 or Part 4

Filed Under: CDN, Cloud, Cloudflare, Digital Ocean, disk, ExactDN, Hosting, Performance, PHP, php72, Scalability, Scalable, Server, Speed, Storage, Ubuntu, UI, UpCloud, VM, Vultr

Setting up a Vultr VM and configuring it with Runcloud.io

July 27, 2017 by Simon

I have set up a Vultr VM manually (guide here). I decided to set up a VM software stack with RunCloud to see if it is as secure and fast (and saves me time).

Buying a Domain

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

I deployed an Ubuntu server in NY for $2.5 a month on Vultr (I chose NY over Sydney as the Sydney $2.5 servers were sold out)

I then signed up for Runcloud.

I opened up ports 80, 443 and 34210 as recommended by RunCloud.

I then connected to my Vultr server with Runcloud.

Then RunCloud asked that I run a script on the server as root.

Tip: Don’t run the script on a different IP like I did.

It appears I accidentally ran the RunCloud install command on the wrong IP. What did it install/change? I looked to see if RunCloud offers an uninstallation command: nope.

Snippet from the RunCloud documentation:

Deleting Your Server

Deleting your server from RunCloud is permanent. You can’t reconnect it back after you have deleted your server. The only way to reconnect is to format your server and run the RunCloud’s installation again.

To completely uninstall RunCloud, you must reformat your server. We don’t have any uninstallation script (yet).

No uninstall?

Time to check out the RunCloud IDE at https://manage.runcloud.io to see what it looks like.

View Server Configuration

I was able to start an NGINX installation/web server within a few clicks (it installed to /etc/nginx-rc).

Runcloud.io Pros

  • Setup was quick.
  • Dashboard looks pretty

Runcloud.io Cons

  • My root access no longer works (what happened?). I did notice that Fail2Ban was blocking loads of IPs.
  • I can’t seem to edit website files with RunCloud.io.
  • Limited by the UI (I could create a database and a database user but not set database user permissions or assign database users to databases (there is a guide but no GUI for adding a user to a DB)).
  • I have to trust that RunCloud have set things up securely.
  • The cPanel UI has more options than RunCloud, IMHO.
  • Other free server monitoring tools exist, like https://www.phpservermonitor.org
  • RunCloud.io RTFM? Seriously (what does the F stand for; are customers really that bad)? https://runcloud.io/rtfm/

Domain

I linked a domain to the IP, now I just need to wait before continuing.

End of Guide

I have been locked out of my RunCloud-managed domain, so I will stick to manually set up servers.

Being locked out is good for security I guess 🙁

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

V1.4 added tags and fixed typos.

v0.31 Domain linked

Filed Under: Cloud, Runcloud, Ubuntu Tagged With: digital ocean, MySQL, runcloud, UpCloud, vultr

Setting up a Digital Ocean Droplet as a sub domain on an AWS Domain

July 15, 2017 by Simon

This guide hopes to show you how to set up a Digital Ocean Droplet (server) as a Sub Domain on an existing AWS domain. I am setting up a Digital Ocean Domain as a subdomain (both existing) and using the subdomain (Digital Ocean server) as a self-service status page. I have set up both domains with SSL certificates and strong Content Security Policies and Public Key Pinning.

Read this newer post on setting up Subdomains.

DO: Obtain the IP addresses for your Digital Ocean Droplet (that will be the subdomain). If you don’t already have a Digital Ocean Droplet click here (and get 2 months free).

Login to your AWS Console for the parent domain. I have a guide on setting up an AWS domain here and Digital Ocean Domain here.

This AWS guide was a handy start Creating a Subdomain That Uses Amazon Route 53 as the DNS Service without Migrating the Parent Domain.

From the AWS Route 53 screen, I clicked Get started now.

From here you can Create a Hosted Zone.

Create a Hosted Zone.

A subdomain can be created on AWS Route 53.

I created a route 53 A Name record and pointed it to a known Digital Ocean droplet IP address.

I created an A Name record on Digital Ocean for the droplet (e.g status.______.com).

I created an IPv6 (AAAA) record on Digital Ocean for the droplet.

I could not ping the server, so I added the Digital Ocean name servers to the Route 53 record set out of desperation.

Final Route information on AWS.

Hmm, nothing works as of yet.

https://www.whatsmydns.net is not showing movement yet.
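While waiting, the delegation can also be checked from a terminal with dig (a sketch; status.yourdomain.com stands in for the real subdomain):

```shell
# Which name servers does the world currently see for the subdomain?
dig +short NS status.yourdomain.com

# Does the subdomain resolve to the droplet's IP yet?
dig +short A status.yourdomain.com
```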

Time to contact AWS for advice.

I tried to post a help request on the AWS forums, but apparently a user who has been paying AWS for 6 months does not have the right to post a new forum thread.

I posted a few questions on Twitter and got helpful replies; I’ll try these out tonight.

Replies

And…

Thanks, guys, I’ll try these tonight and update this post.

I created a record set for the parent domain on AWS and an A record for the Digital Ocean subdomain, with no luck.

This post will be updated soon.

Read my guide on Securing an Ubuntu VM with a free LetsEncrypt SSL certificate in 1 Minute.

v1.9 added info on let’s encrypt (10:38pm 29th July 2017 AEST)


Filed Under: AWS, Digital Ocean, DNS, Route53 Tagged With: AWS, digital ocean, DNS

Application scalability on a budget (my journey)

August 12, 2016 by Simon Fearby

If you have read my other guides on https://www.fearby.com you may be able to tell I like the self-managed Ubuntu servers you can buy from Digital Ocean for as low as $5 a month (click here to get $10 in free credit and start your server in 5 minutes). Vultr has servers as low as $2.5 a month. Digital Ocean is a great place to start up your own server in the cloud, install some software and deploy some web apps or the backend (API/databases/content) for mobile apps or services. If you need more memory, processor cores or hard drive storage, simply shut down your Digital Ocean server, click a few options to increase your server resources and you are good to go (this is called “scaling up”). Don’t forget to cache content to limit usage.

This scalability guide is a work in progress (watch this space). My aim is to serve 2000 concurrent users a second with geo queries (like Pokémon Go) for under $80 a month (1x server and 1x MongoDB cluster). Currently serving 600~1200/sec.

Buying a Domain

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Estimating Costs

If you don’t estimate costs, you are planning to fail.

"By failing to prepare you are preparing to fail." - Benjamin Frankin

Estimate the minimum number of users you need to remain viable and then the expected maximum number of users you need to handle. What will this cost?
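As a rough worked example of that arithmetic (hypothetical figures: the $80/month budget and 2000 requests/second aim stated earlier in this guide), a fully utilised server makes the per-request cost tiny; the hard part is paying for peak capacity you rarely use:

```shell
# Hypothetical: $80/month budget, 2000 requests/sec sustained for a 30-day month
BUDGET=80
RPS=2000
PER_MONTH=$((RPS * 86400 * 30))   # requests served in a 30-day month
echo "Requests per month: $PER_MONTH"
awk -v b="$BUDGET" -v r="$PER_MONTH" \
  'BEGIN { printf "Cost per million requests: $%.4f\n", b / (r / 1000000) }'
```

At full utilisation that works out to roughly $0.0154 per million requests; repeat the sums with your own minimum and maximum user estimates.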

Planning for success

Anyone who has researched application scalability has come across articles on apps that crashed under load at launch.  Even governments can spend tens of millions on developing a scalable solution, plan for years and fail dismally on launch (check out the Australian Census disaster).  The Australian government contracted IBM to develop a solution to receive up to 15 million census submissions between the 28th of July and the 5th of September. IBM designed a system and a third-party performance test planned for up to 400 submissions a second, but the maximum rate received on census night before the system crashed was only 154 submissions a second. Predicting application usage can be hard; in the case of the Australian census, the bulk of people logged on to submit census data on the recommended night of the 9th of August 2016.

Sticking to a budget

This guide is not for people with deep pockets wanting to deploy a service to 15 million people but for solo app developers or small non-funded startups on a serious budget.  If you want a very reliable scalable solution or service provider you may want to skip this article and check out services by the following vendors.

  • Firebase
  • Azure (good guides by Troy Hunt: here, here and here).
  • Amazon Web Services
  • Google Cloud
  • NGINX Plus

The above vendors have what seems like an infinite array of products and services that can form part of your solution, but beware: the more products you use, the more complex it will be and the higher the costs.  A popular app can be an expensive app. That’s why I like Digital Ocean, as you don’t need a degree to predict and plan your server’s average usage and buy extra resource credits if you go over predicted limits. With Digital Ocean you buy a virtual server and you get known memory, storage and data transfer limits.

Let’s go over topics that you will need to consider when designing or building a scalable app on a budget.

Application Design

Your application needs will ultimately decide the technology and servers you require.

  • A simple business app that shares events, products and contacts would require a basic server and MySQL database.
  • A turn-based multiplayer app for a few hundred people would require more server resources and endpoints (an NGINX web server, NodeJS and an optimised MySQL database would be OK).
  • A larger augmented reality app for thousands of people would require a mix of databases and servers to separate the workload (an NGINX web server and a NodeJS-powered API talking to a MySQL database to handle logins, and a single-server NoSQL database for the bulk of the shared data).
  • An augmented reality app with tens of thousands of users (an NGINX web server, a NodeJS-powered API talking to a MySQL database to handle logins and a NoSQL cluster for the bulk of the shared data).
  • A business-critical multi-user application with real-time chat? Are you sure you are on a budget, as this will require a full solution from Azure, Firebase or Amazon Web Services.

A native app, hybrid app or full web app can drastically change how your application works (learn the difference here).

Location, location, location.

You want your server and resources to be as close to your customers as possible; this is one rule that cannot be broken. If you need to spend more money to buy a server in a location closer to your customers, do it.

My Setup

I have a Digital Ocean server with 2 cores and 2GB of RAM in Singapore that I use to test and develop apps. That one server has MySQL, NGINX, NodeJS, PHP and many scripts running on it in the background.  I also have a MongoDB cluster (3 servers) running on AWS in Sydney via MongoDB.com.  I looked into CouchDB via Cloudant but needed the GeoJSON features with fair dedicated pricing. I am considering moving off the Digital Ocean Ubuntu server (in Singapore) and onto an AWS server (in Sydney). I am using promise-based NodeJS calls where possible to avoid blocking calls to the operating system, database or web.  Update: I moved to a Vultr domain (article here)

Here is a benchmark for HTTP and HTTPS requests from rural NSW to Sydney, Australia, then Melbourne, then Adelaide, then Perth, then Singapore, to a Node server behind NGINX that does a callback to Sydney, Australia to get a GeoQuery from a large database and returns it back to the customer via Singapore.

SSL

SSL will add processing overhead and latency.

Here is a breakdown of the hops from my desktop in Regional NSW making a network call to my Digital Ocean Server in Singapore (with private parts redacted or masked).

traceroute to destination-server-redacted.com (###.###.###.##), 64 hops max, 52 byte packets
 1  192-168-1-1 (192.168.1.1)  11.034 ms  6.180 ms  2.169 ms
 2  xx.xx.xx.xxx.isp.com.au (xx.xx.xx.xxx)  32.396 ms  37.118 ms  33.749 ms
 3  xxx-xxx-xxx-xxx (xxx.xxx.xxx.xxx)  40.676 ms  63.648 ms  28.446 ms
 4  syd-gls-har-wgw1-be-100 (203.221.3.7)  38.736 ms  38.549 ms  29.584 ms
 5  203-219-107-198.static.tpgi.com.au (203.219.107.198)  27.980 ms  38.568 ms  43.879 ms
 6  tengige0-3-0-19.chw-edge901.sydney.telstra.net (139.130.209.229)  30.304 ms  35.090 ms  43.836 ms
 7  bundle-ether13.chw-core10.sydney.telstra.net (203.50.11.98)  29.477 ms  28.705 ms  40.764 ms
 8  bundle-ether8.exi-core10.melbourne.telstra.net (203.50.11.125)  41.885 ms  50.211 ms  45.917 ms
 9  bundle-ether5.way-core4.adelaide.telstra.net (203.50.11.92)  66.795 ms  59.570 ms  59.084 ms
10  bundle-ether5.pie-core1.perth.telstra.net (203.50.11.17)  90.671 ms  91.315 ms  89.123 ms
11  203.50.9.2 (203.50.9.2) 80.295 ms  82.578 ms  85.224 ms
12  i-0-0-1-0.skdi-core01.bx.telstraglobal.net (Singapore) (202.84.143.2)  132.445 ms  129.205 ms  147.320 ms
13  i-0-1-0-0.istt04.bi.telstraglobal.net (202.84.243.2)  156.488 ms
    202.84.244.42 (202.84.244.42)  161.982 ms
    i-0-0-0-4.istt04.bi.telstraglobal.net (202.84.243.110)  160.952 ms
14  unknown.telstraglobal.net (202.127.73.138)  155.392 ms  152.938 ms  197.915 ms
15  * * *
16  destination-server-redacted.com (xx.xx.xx.xxx)  177.883 ms  158.938 ms  153.433 ms

160ms to send a request to the server.  This is on a good day when the Netflix Effect is not killing links across Australia.

Here is the route for a call from the server above to the MongoDB Cluster on an Amazon Web Services in Sydney from the Digital Ocean Server in Singapore.

traceroute to redactedname-shard-00-00-nvjmn.mongodb.net (##.##.##.##), 30 hops max, 60 byte packets
 1  ###.###.###.### (###.###.###.###)  0.475 ms ###.###.###.### (###.###.###.###)  0.494 ms ###.###.###.### (###.###.###.###)  0.405 ms
 2  138.197.250.212 (138.197.250.212)  0.367 ms 138.197.250.216 (138.197.250.216)  0.392 ms  0.377 ms
 3  unknown.telstraglobal.net (202.127.73.137)  1.460 ms 138.197.250.201 (138.197.250.201)  0.283 ms unknown.telstraglobal.net (202.127.73.137)  1.456 ms
 4  i-0-2-0-10.istt-core02.bi.telstraglobal.net (202.84.225.222)  1.338 ms i-0-4-0-0.istt-core02.bi.telstraglobal.net (202.84.225.233)  3.817 ms unknown.telstraglobal.net (202.127.73.137)  1.443 ms
 5  i-0-2-0-9.istt-core02.bi.telstraglobal.net (202.84.225.218)  1.270 ms i-0-1-0-0.pthw-core01.bx.telstraglobal.net (202.84.141.157)  50.869 ms i-0-0-0-0.pthw-core01.bx.telstraglobal.net (202.84.141.153)  49.789 ms
 6  i-0-1-0-5.sydp-core03.bi.telstraglobal.net (202.84.143.145)  107.395 ms  108.350 ms  105.924 ms
 7  i-0-1-0-5.sydp-core03.bi.telstraglobal.net (202.84.143.145)  105.911 ms 21459.tauc01.cu.telstraglobal.net (134.159.124.85)  108.258 ms  107.337 ms
 8  21459.tauc01.cu.telstraglobal.net (134.159.124.85)  107.330 ms unknown.telstraglobal.net (134.159.124.86)  101.459 ms  102.337 ms
 9  * unknown.telstraglobal.net (134.159.124.86)  102.324 ms  102.314 ms
10  * * *
11  54.240.192.107 (54.240.192.107)  103.016 ms  103.892 ms  105.157 ms
12  * * 54.240.192.107 (54.240.192.107)  103.843 ms
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *

It appears Telstra Global or AWS blocks the tracing of the network path closer to the destination, so I will ping to see how long the trip takes.

bytes from ec2-##-##-##-##.ap-southeast-2.compute.amazonaws.com (##.##.##.##): icmp_seq=1 ttl=50 time=103 ms

It is obvious that the longest part of the response to the client is not the GeoQuery on the MongoDB cluster or the processing in NodeJS, but the travel time of the packet and the cost of securing it.

My server locations are not optimal. I cannot move the AWS MongoDB cluster to Singapore because MongoDB.com doesn’t have servers in Singapore, and Digital Ocean doesn’t have servers in Sydney.  I should move my services on Digital Ocean to Sydney, but for now let’s see how far this Digital Ocean server in Singapore and MongoDB cluster in Sydney can go. I wish I had known about Vultr, as they are like Digital Ocean but have a location in Sydney.

Security

Secure (SSL) communication is now mandatory for Apple and Android apps talking over the internet, so we can’t eliminate that to speed up the connection, but we can move the server. I am using more modern SSL ciphers in my SSL certificate, so they may also slow down the process. Here is a speed test of my server’s ciphers. If you use stronger security, expect a small CPU hit.

cipherspeed

fyi: I have a few guides on adding a commercial SSL certificate to a Digital Ocean VM and Updating OpenSSL on a Digital Ocean VM. Guide on configuring NGINX SSL and SSL. Limiting ssh connection rates to prevent brute force attacks.

Server Limitations and Benchmarking

If you are running your website on a shared server (e.g. a cPanel domain) you may encounter resource limit warnings, as web hosts and some providers want to charge you more for moderate to heavy use.

Resource Limit Is Reached 508
The website is temporarily unable to service your request as it exceeded resource limit. Please try again later.

I have never received a resource limit reached warning with Digital Ocean.

Most hosts (AWS/Digital Ocean/Azure etc.) have limitations on your server, and when you exceed a magical limit they restrict your server or start charging excess fees (they are not running a charity).  AWS and Azure have different terminology for CPU credits, and you really need to predict your application’s CPU usage to factor in scalability and monthly costs. Servers and databases generally have limited IOPS (input/output operations per second), and lower-tier plans offer lower IOPS.  MongoDB Atlas lower tiers have < 120 IOPS, middle tiers have 240~2400 IOPS and higher tiers have 3,000 to 20,000 IOPS.

Know your bottlenecks

The siege HTTP stress testing tool is good; the command below will throw 400 local HTTP connections at your website.

#!/bin/bash
siege -t1m -c400 'http://your.server.com/page'

The results seem a bit low: 47.3 trans/sec.  No failed transactions though 🙂

** SIEGE 3.0.5
** Preparing 400 concurrent users for battle.
The server is now under siege...
Lifting the server siege.. done.

Transactions: 2803 hits
Availability: 100.00 %
Elapsed time: 59.26 secs
Data transferred: 79.71 MB
Response time: 7.87 secs
Transaction rate: 47.30 trans/sec
Throughput: 1.35 MB/sec
Concurrency: 372.02
Successful transactions: 2803
Failed transactions: 0
Longest transaction: 8.56
Shortest transaction: 2.37

Sites like http://loader.io/ allow you to hit your web server or web page with many requests a second from outside of your server.  Below I threw 50 concurrent users at a Node API endpoint that was running a geo query on my MongoDB cluster.

nodebench50c

The server can easily handle 50 concurrent users a second. Latency is an issue though.

nodebench50b

I can see the two secondary MongoDB servers being queried 🙂

nodebench50a

Node has decided to use only one CPU under this light load.

I tried 100 concurrent users over 30 seconds. CPU activity was about 40% of one core.

nodebench50d

I tried again with a 100-200 concurrent user limit (passed). CPU activity was about 50% using two cores.

nodebench50e

I tried again with a 200-400 concurrent user limit over 1 minute (passed). CPU activity was about 80% using two cores.

nodebench50f

nodebench50g

It is nice to know my promise-based NodeJS code can handle 400 concurrent users requesting a large dataset from GeoJSON without timeouts. The result is about the same as siege (47.6 trans/sec). The issue now is the delay in the data getting back to the user.

I checked the MongoDB cluster and I was only reaching 0.17 IOPS (maximum 100) and 16% CPU usage so the database cluster is not the bottleneck here.

nodebench50h

Out of curiosity, I ran a 400 connection benchmark to the node server over HTTP instead of HTTPS and the results were near identical (400 concurrent connections with 8000ms delay).

I really need to move my servers closer together to avoid delays in responding. Serving 47.6 geo queries a second (4,112,640 a day) with a large payload is OK, but it is not good enough for my application yet.
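A quick sanity check of the requests-per-day figure quoted above:

```shell
# 47.6 transactions/sec sustained over an 86,400-second day
awk 'BEGIN { printf "%.0f requests/day\n", 47.6 * 86400 }'
# prints: 4112640 requests/day
```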

Limiting Access

I may limit access to my API based on geo lookups (http://ipinfo.io is a good service that allows you to programmatically limit access to your app services) and auth tokens, but this will slow down uncached requests.
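ipinfo.io has a simple HTTP API for this sort of lookup (a sketch; 8.8.8.8 is just an example IP, and real use needs an access token plus caching to avoid rate limits):

```shell
# Two-letter country code for an IP
curl -s https://ipinfo.io/8.8.8.8/country

# Full JSON record (city, region, org, etc.)
curl -s https://ipinfo.io/8.8.8.8/json
```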

Scale Up

I can always add more cores or memory to my server in minutes, but that requires a shutdown. 400 concurrent users do max out my CPU and push memory above 80%, so adding more cores and memory would be beneficial.

Digital Ocean does allow me to permanently or temporarily raise the resources of the virtual machine. To obtain 2 more cores (4 total) and 4x the memory (8GB) I would need to jump to the $80/month plan and adjust the NGINX and Node configuration to use the extra cores/RAM.
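After resizing, NGINX will not use the extra cores until its worker settings are updated; an illustrative fragment (the values are assumptions, not this server's actual config):

```nginx
# /etc/nginx/nginx.conf (fragment)
worker_processes auto;        # spawn one worker per CPU core

events {
    worker_connections 4096;  # per-worker connection limit
}
```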

nodebench50i

If my app is profitable I can certainly reinvest.

Scale Out

With MongoDB clusters I can easily shard and gain extra throughput if I need it, but with 0.17% of my existing cluster being utilised I should focus on moving the servers closer together.

NGINX does have commercial-level products that handle scalability, but they cost thousands. I could scale out manually by setting up a Node proxy that points to multiple servers receiving parent calls. This may be beneficial, as Digital Ocean servers start at $5 a month, but it would add a whole lot of complexity.
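The open-source NGINX can already do basic round-robin load balancing without the commercial product; a minimal sketch (the backend IPs and port are hypothetical):

```nginx
# Spread API traffic across two Node servers (hypothetical private IPs)
upstream node_api {
    server 10.0.0.2:3000;
    server 10.0.0.3:3000;
}

server {
    listen 80;
    location / {
        proxy_pass http://node_api;
    }
}
```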

Cache Solutions

  • Nginx Caching
  • OpCache if you are using PHP.
  • Node-cache – In memory caching.
  • Redis – In memory caching.

Monitoring

Monitoring your server and resources is essential for detecting memory leaks and spikes in activity. htop is a great command-line monitoring tool on Linux.

http://pm2.keymetrics.io/ is a good node package monitoring app but it does go a bit crazy with processes on your box.

CPU Busy

Communication

It is a good idea to inform users of server status and issues with delayed queries, and when things go down, inform people early. Update: Article here on self-service status pages.

censisfail

The Future

UPDATE: 17th August 2016

I set up an Amazon Web Services EC2 server (read the AWS setup guide here) with only 1 CPU and 1GB of RAM and have easily achieved 700 concurrent connections.  That’s 41,869 geo queries served a minute.

Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

AWS MongoDB Test

The MongoDB Cluster CPU was 25% usage with  200 query opcounters on each secondary server.

I think I will optimize the AWS OS ‘swappiness’ and performance stats and aim for 2000 queries.

This is how many hits I can get with the CPU remaining under 95% (794 geo serves a second). AMAZING.

AWS MongoDB Test

Another recent benchmark:

AWS Benchmark

UPDATE: 3rd Jan 2017

I decided to ditch the cluster of three AWS servers running MongoDB and instead set up a single MongoDB instance on an Amazon t2.medium (2 CPU/4GB RAM) server for about $50 a month. I can always upgrade to the AWS MongoDB cluster later if I need it.

OK, I just threw 2000 concurrent users at the new AWS single-server MongoDB instance and the server was able to handle the delivery (no dropped connections, but the average response time was 4,027 ms; this is not ideal, but this is 2000 users a second, and that is after the API handles the IP banned list, user account validity, the last-5-minute query limit check (from MySQL), payload validation on every field and then the MongoDB geo query).

scalability_budget_2017_001

The two cores on the server were hitting about 95% usage. The benchmark here uses the same dataset as above, but the API adds a whole series of payload checks, user limiting and logging.

Benchmarking with 1000 sustained users a second, the average response time is a much lower 1,022 ms. Honestly, if I had 1000-2000 user queries a second I would upgrade the server or add a series of lower-spec AWS t2.micro servers and create my own cluster.

Security

Cheap may not be good (hosting or DIY); do check your website often on https://www.shodan.io and see if it is exposing open software or is known to hackers.
If this guide has helped, please consider donating a few dollars.


v1.7 added self-service status page info and Vultr info

Short: https://fearby.com/go2/scalability/

Filed Under: Cloud, Development, Domain, Hosting, MySQL, NodeJS, Scalability, Scalable, Security, ssl, VM Tagged With: api, digital ocean, mongodb, scalability, server

How to buy a new domain (dedicated server from Digital Ocean) and add an SSL certificate from Namecheap.

December 3, 2015 by Simon Fearby

This guide will show you how to buy a domain and link it to a Digital Ocean VM.

Update (June 2018): I don’t use Digital Ocean anymore. I moved my domain to UpCloud (they are that awesome). Use this link to signup and get $25 free credit. Read the steps I took to move my domain to UpCloud here.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

This old post is available fyi,

1. How to buy a new website domain from namecheap.com

1.1 Create an account at namecheap.com then navigate to registrations

1.2 Search for your domain (don’t forget to click show more to see other domain extension types).

1.3 Select the domain you want.

1.4 I am going to opt for a free year of WhoisGuard (WhoisGuard is a service that allows customers to keep their domain contact details hidden from spammers, marketing firms and online fraudsters. When purchased, the WhoisGuard subscription is permanently assigned to a domain and stays attached to it as long as the fee is paid).

1.5 I will also opt in to the discounted PositiveSSL for $2.74 (bargain) (fyi: Namecheap SSL types).

1.6 Check the Namecheap coupons page and apply this month’s coupon for 10% off.

1.7 Confirmed the order for $11.05 USD.

1.8 Congratulations you have just ordered a domain and SSL certificate.

More details: https://www.digitalocean.com/community/tutorials/how-to-point-to-digitalocean-nameservers-from-common-domain-registrars

2. Create a http://www.c9.io account

This will give you a nice UI to manage your unmanaged server.

2.1 Upgrade from the free account to the “Micro $9.00 / monthly” at https://c9.io/account/billing (this will allow you to use the c9.io IDE to connect to as many Ubuntu VM’s as you wish).

3. Buy the hosting (droplet) from digital ocean

3.1 Go to https://www.digitalocean.com and create an account and log in.

Note: If you are adding an additional server (droplet) to a digital ocean account and you want the droplets to talk to each other make sure your existing servers have a private network setup.

3.2 Click Create Droplet

3.3 Enter a server name: e.g “yourdomainserver”

3.4 Select a Server Size (this can be upgraded later), Digital Ocean recommends a server with at least 30GB for a WordPress install (but you can upgrade later).

3.5 Select an Image (you can stick with a plain ubuntu image) but it may save you time to install an image with the LAMP stack already on it.

LAMP stack is a popular open-source web platform commonly used to run dynamic websites and servers. It includes Linux, Apache, MySQL, and PHP/Python/Perl and is considered by many the platform of choice for development of high-performance web applications which require a solid and reliable foundation.  I will select LAMP.

3.6 Tick “private networking” if you think you may add more servers later (growing business)?

3.7 Paste in your SSH key from your c9.io account at https://c9.io/account/ssh (this is important, don’t skip this).

3.8 Click Create Droplet

3.9 Congratulations you have just created an Ubuntu VM in the cloud.

3.10 If you type your droplets IP into a web browser it should load your pages from your web server.

3.11 You can view your Ubuntu droplet details in the Digital Ocean portal.  You may need to reboot the server, make snapshots (backups) or reset passwords here.

3.12 You will need to change the droplet root password that was emailed to you from Digital Ocean (if you never received one you can reset the root password in the digitalocean.com portal).  You can change your password using the VNC window in the Digital Ocean portal (https://cloud.digitalocean.com/droplets/ -> Access -> Console Access). If you had no luck changing your password with the VNC method, you can use your Mac terminal and type ssh root@xx.xx.xx.xx (where xx.xx.xx.xx is your droplet’s IP), then type yes, enter the password from the Digital Ocean email and change the password to a new, strong password (and write it down).

3.13 Now we need to install the distro-stable nodejs (for the c9.io IDE) on the droplet by typing “sudo apt-get update” then “sudo apt-get install nodejs“.

4. Now we can link the digital ocean ubuntu server to the http://www.c9.io IDE.

4.1 Login to your c9.io account.

4.2 Click Create a new workspace.

4.3 Enter a Workspace name and description.

4.4 Click Remote SSH Workspace

4.5 Enter “root” as the username

4.6 Type in your new servers IP (obtained from viewing your droplet at digital ocean https://cloud.digitalocean.com/droplets ).

4.7 Set the initial path as: ./

4.8 Set the NodeJS path as: /usr/bin/nodejs

4.9 Ensure your SSH key is the same one you entered into the droplet.

4.10 Click Create Workspace.

Troubleshooting: If your workspace cannot log in, you may need to SSH back into your droplet (via the Digital Ocean VNC console or SSH) and paste your c9.io SSH key into the ~/.ssh/authorized_keys file and save it. I used the command “sudo nano ~/.ssh/authorized_keys”, pasted in my c9.io SSH key, then pressed CTRL+O, then ENTER, then CTRL+X.

4.11 If all goes well, you will see that c9.io now has a workspace shortcut for you to launch your website.

4.12 You will be able to connect to your droplet from c9.io and edit files or upload files (without the hassle of using SFTP and cPanel).

5. Now we will link the domain name to the IP-based droplet.

5.1 Login to your name cheap account.

5.2 Click “Account” then  “Domain List“, turn off domain parking and then click  “Manage”  (next to the new domain) then click “Advanced DNS”

5.3 Click “Edit” next to “Domain Nameserver Type” then choose “Custom“.

5.4 Add the following three name servers “ns1.digitalocean.com“, “ns2.digitalocean.com” and “ns3.digitalocean.com” and click “Save Changes“.

namecheapnameservers

5.5 Login to https://cloud.digitalocean.com/domains, type your domain name (e.g. “yourdomain.com”) into the domain box and select your droplet.

5.6 Configure the following DNS A Name record: “@” - “XXX.XXX.XXX.XXX” (where XXX.XXX.XXX.XXX is your droplet’s IP address) and CName records: “www” - “www.yourdomain.com.” and “*” - “www.yourdomain.com.”

It can take 24-48 hours for DNS to replicate around the world, so I would suggest you go to bed at this stage. You can use https://www.whatsmydns.net/#A/yourdomain.com to check the DNS replication progress.

5.7 But if you are impatient check out the DNS replication around the world using this link: https://www.whatsmydns.net
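You can also watch propagation from your own terminal (yourdomain.com is a placeholder):

```shell
# Ask a Digital Ocean name server directly (answers as soon as the zone exists)
dig +short A yourdomain.com @ns1.digitalocean.com

# Ask your local resolver (answers only once the change reaches your ISP)
dig +short A yourdomain.com
```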

fyi: The full name cheap DNS guide is here.

fyi: The Digital Ocean DNS guide is located here

Setup a SSL Certificate

You can skip section 6 entirely and install a free SSL certificate instead if you wish (read this guide on using Let’s Encrypt).

Follow the rest of this guide if you want to buy an SSL cert from Namecheap/Comodo (Let’s Encrypt is easier).

6. Generate a CSR and request the certificate from Namecheap.

6.1 Open your c9.io workspace for your domain.

6.2 Click the Window menu, then New Terminal.

6.3 Type: cd ~/.ssh/

6.4 Type the following to generate the CSR files (my server is “servername.com”; replace this with your server name).

# cd ~/.ssh
.ssh#

openssl req -newkey rsa:2048 -nodes -keyout servername.key -out servername.csr

Generating a 2048 bit RSA private key
.............................+++
............+++
writing new private key to 'servername.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:AU
State or Province Name (full name) [Some-State]:New South Wales
Locality Name (eg, city) []:Your City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Fearby.com
Organizational Unit Name (eg, section) []:Developer
Common Name (e.g. server FQDN or YOUR name) []:servername.com
Email Address []: [email protected]

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:****************
string is too long, it needs to be less than  20 bytes long
A challenge password []:***************
An optional company name []:Your Name
~/.ssh# ls -al
total 20
drwx------ 2 root root 4096 Oct 17 10:20 .
drwx------ 7 root root 4096 Oct 17 10:17 ..
-rw------- 1 root root  399 Oct 17 08:06 authorized_keys
-rw-r--r-- 1 root root 1175 Oct 17 10:20 servername.csr
-rw-r--r-- 1 root root 1704 Oct 17 10:20 servername.key
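Before uploading the CSR, you can optionally sanity-check that the key and CSR belong together by comparing the hash of the public key each one contains; the two hashes printed should be identical:

```shell
# Optional check: confirm servername.key and servername.csr are a matching pair.
# Run from ~/.ssh where the files were generated; the two hashes should match.
openssl pkey -in servername.key -pubout | sha256sum
openssl req -in servername.csr -pubkey -noout | sha256sum
```

If the hashes differ, the CSR was generated from a different key and the issued certificate will not work with this key.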

6.3 Using the folder tree in the c9.io file browser, navigate to /root/.ssh/, open the text file "servername.csr" and copy the file contents.

6.4 In a separate window go to https://ap.www.namecheap.com/ProductList/SslCertificates, paste in the CSR file contents and click Submit.

6.5 Verify your details and click Next.

6.6 Next you will need to verify your domain by downloading a file and uploading it to your server. Under "DCV Method" select "HTTP" and follow the prompts at Namecheap to download the file.

6.7 Complete the form (company contacts) and click Next.

6.8 Go to the Certificate Details page to download the validation file, or wait for the email with the zip file attached.

fyi: the support forums for this certificate are at https://support.comodo.com (but the site is rubbish; most pages load empty).

6.9 Under "DCV Methods in Use" click "Edit Methods", then "Download File".

6.10 Using the c9.io interface, upload the file to the /var/www/html folder (drag and drop).

6.11 Wait half an hour, then go back to your Namecheap dashboard and see if the certificate has been verified (it may take longer than that).

6.12 After a while a certificate will be issued. Under "See Details", click Download Certificate.

6.13 Upload the certificate files ("servername.ca-bundle", "servername.crt" and "servername.p7b") using the c9.io IDE to /root/.ssh/.

6.14 Add "ServerName localhost" to "/etc/apache2/apache2.conf".

6.16 In a c9.io terminal run "sudo nano /etc/hosts" and add this line: "127.0.0.1 servername.com".

6.17 Run this command in a c9.io terminal: "sudo a2enmod ssl".
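With mod_ssl enabled, Apache also needs a VirtualHost that points at the uploaded certificate files. This is a minimal sketch only; the file names and paths assume the upload location used in step 6.13, so adjust them to whatever Comodo actually issued, then restart Apache with "sudo service apache2 restart":

```apacheconf
<VirtualHost *:443>
    ServerName servername.com
    DocumentRoot /var/www/html

    SSLEngine on
    # Paths below follow this guide's upload folder; adjust to your files.
    SSLCertificateFile      /root/.ssh/servername.crt
    SSLCertificateKeyFile   /root/.ssh/servername.key
    SSLCertificateChainFile /root/.ssh/servername.ca-bundle
</VirtualHost>
```

Run "sudo apachectl configtest" first to catch typos before restarting.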

fyi: Comodo support forums: https://support.comodo.com/index.php?/Default/Knowledgebase/List/Index/1

fyi: Comodo apache certificate installation instructions: https://support.comodo.com/index.php?/Default/Knowledgebase/Article/View/637/37/certificate-installation-apache–mod_ssl

Don’t forget to cache content to optimise your Web server
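As a sketch of what caching can look like in Apache, mod_expires can set expiry headers per content type (this assumes you have run "sudo a2enmod expires"; the ages shown are examples, not recommendations):

```apacheconf
<IfModule mod_expires.c>
    ExpiresActive On
    # Example expiry ages - tune these to how often your assets change.
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType image/png  "access plus 1 month"
    ExpiresByType text/css   "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```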

Security

Having a server introduces risks; do check your website often with https://www.shodan.io to see if it has exposed software or is known to hackers.
todo: SSL https://www.namecheap.com/support/knowledgebase/article.aspx/794/67/how-to-activate-ssl-certificate

Easily deploy an SSD cloud server on @DigitalOcean in 55 seconds. Sign up using my link and receive $10 in credit: https://www.digitalocean.com

— end of skippable section —

Seriously, Let's Encrypt allows you to add an SSL cert in minutes (compared with Comodo SSL certificates).


v1.7 added some more.

Filed Under: Cloud, Domain, Hosting, Linux, MySQL, Security, ssl, VM Tagged With: digital ocean, domain, namecheap, ssl

The quickest way to set up a scalable development IDE and web server

June 8, 2015 by Simon Fearby

fyi: Consider reading this first (newer blog post):  How to buy a new domain and SSL cert from NameCheap, a Server from Digital Ocean and configure it.

Buying a Domain

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Why do I need a free Development IDE/VM?

  • You already have heaps of sub domains/sites/blogs on one CPanel domain and you don’t want to slow down your server anymore.
  • You need a new collaboration web server setup in minutes.
  • You want a server where you have full control to install the latest widgets (NGINX, NodeJS etc).
  • You want a single interface where you can deploy, develop and test online.
  • You want to save money
  • You want to access and edit your sites from anywhere.

The Solution

Cloud9 ( http://www.c9.io ) combines a powerful online code editor with a full Ubuntu server in the cloud. Simply pick your configuration, develop an app, invite others in to preview and help code.

Update 2018: For the best performing VM host (UpCloud) read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).

Now there is no need to spend valuable development time setting up a hardware/software platform. You can create, build and run almost any development stack in minutes. Cloud9 maintains the server and you have full control of it.

Signing up for a C9 account.

Cloud 9 offers a number of hosting plans (one free) with a range of hardware resources for when you want to scale up. The free tier is great if you want to keep your development environment closed. Use this link to get $19 free credit: https://c9.io/c/DLtakOtNcba


Before you connect Cloud 9 to your Digital Ocean VM, connect to the server via the console in the Digital Ocean admin panel (you may need to reset your root password) and install NodeJS (required by the c9.io IDE).

Installing NodeJS

  • curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
  • sudo apt-get install -y nodejs
  • node -v

Now you will have Node v6.3.0 (or whatever the latest 6.x release is at install time).

Create a development Workspace.

Once you create a Cloud 9 account you can create a VM workspace. You can choose some common software packages to be installed by default. Don't worry, you can install anything you want later from the command line in the VM.


How simple is that? A new development environment in minutes.

Development Workspace

You can edit new code and play with WordPress or NodeJS, all from the one Cloud9 IDE. The Cloud 9 IDE lets you open a "bash terminal" tab, folder list, web browser, code window and debug tools (all from the web).

Code on the left, WordPress on the right, terminal on the bottom 🙂

Edit and View Code Workspace

C9 IDE

You can Install what you want

Because you have access to the Linux bash terminal you can, for example, type the following to install an NGINX web server.

  1. sudo apt-get update
  2. sudo apt-get install nginx
  3. sudo service nginx start

Full Bash Terminal


As usual installing stuff in Linux requires loads of googling and editing config files so beware.
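As a starting point for that config editing, nginx serves the default site defined in /etc/nginx/sites-available/default; a minimal server block looks something like this (the root path shown is the Ubuntu default and may differ on your box):

```nginx
server {
    listen 80 default_server;
    server_name _;

    root /var/www/html;
    index index.html index.htm;

    location / {
        # Serve the file if it exists, otherwise return 404
        try_files $uri $uri/ =404;
    }
}
```

Check the syntax with "sudo nginx -t" before reloading with "sudo service nginx reload".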

What are the downsides of a c9.io Ubuntu server?

Your development environment (public or private) is mostly off limits to the outside world unless you invite people in who have a Cloud 9 account. This is great if you want to develop a customer's website off the grid and keep it secure, or share the development with other developers. Cloud 9 don't really have a "go to production" plan, so you will need to find a host to deploy to when you are ready.

Luckily this is where http://www.digitalocean.com comes in. Digital Ocean allows you to create a real/public VM (just like Cloud 9) and, best of all, you can connect it to the Cloud 9 IDE.

The only downside is you will need to move on from the free Cloud 9 account and pay $9 a month to connect securely (via SSH) to your new (real) Digital Ocean VM. On the upside, the $19-a-month plan gives you twice the RAM (1GB), 10x the storage (10GB) and 2 premium (private) accounts.

Signing up for a Digital Ocean Account

The cheapest Digital Ocean Hosting plan is $5 a month. If you want $10 free credit at Digital Ocean (two months free) please use this link: https://www.digitalocean.com/?refcode=99a5082b6de5

Tip:

Granting SSH Access (before you create a server (droplet))

Tip: Add your Cloud 9 SSH key to your account before creating a droplet (VM). I added my SSH key when the VM/Droplet was created and could not connect to it from Cloud 9. I then deleted the droplet, added the SSH key to my Digital Ocean account here, then created the Droplet (VM) and all was OK. You can find your SSH key on the front page of your Cloud 9 dashboard.


This is the magic option: if you skip this you will be emailed a password for your VM and you will be on your own connecting to it with a secure terminal window. If you add your Cloud 9 SSH key (found in your Cloud 9 IDE) to Digital Ocean at https://cloud.digitalocean.com/settings/security, you can connect to and control your new Digital Ocean VM from the Cloud 9 UI.
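If you did miss that option, you can authorise the Cloud 9 key on the droplet manually from the Digital Ocean console. This is a sketch; the key string below is a placeholder, so paste the real public key from your Cloud 9 dashboard in its place:

```shell
# On the droplet: add the Cloud 9 public key to authorized_keys manually.
# 'ssh-rsa AAAAB3...placeholder cloud9' is NOT a real key - use your own.
mkdir -p ~/.ssh && chmod 700 ~/.ssh
echo 'ssh-rsa AAAAB3...placeholder cloud9' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```

The 700/600 permissions matter: sshd will silently refuse keys in a world-readable authorized_keys file.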

Now you can create a server (droplet)


A Digital Ocean server can be set up in minutes. If you only use it for 2 weeks you will only be charged for 2 weeks. If you use my link your first 2 months are free (if you select a $5 server).

Your server should be created in well under 5 minutes. Write down your VM’s IP.

Digital Ocean Droplet (VM) Created

Connecting your C9 account to Digital Ocean Droplet

Now go back to Cloud 9 and login. Go here ( https://c9.io/account/ssh ) and add your SSH key from Digital Ocean.

Cloud 9 guide on setting up SSH on your server: https://docs.c9.io/docs/running-your-own-ssh-workspace


fyi: Here is a more recent post of how to connect Cloud 9 with AWS.

Create a new workspace with these settings (but use your IP from digital ocean) to connect to your new Digital Ocean VM.


Now you can develop like a pro. Cloud 9 will allow you to login to your development environment from anywhere and resume where you left off.

Traps and Tips

  • Consider buying this course: https://www.udemy.com/all-about-nodejs/?dtcode=9TQkocT33Eck 
  • Get your VM/Droplets right (if they don’t work as expected delete them and start again).
  • Know how to safely shutdown a Linux VM.
  • Google.
  • If you receive the error “Could not execute node.js on [email protected] bash: /usr/bin/nodejs:” in C9 when connecting to the server, try installing node via the Digital Ocean manual console window.

Connecting your new Cloud IP to a cPanel sub domain

If you have a cPanel domain elsewhere you can link your new Digital Ocean cloud VM IP to a new sub domain.

  1. Login to your CPanel domain UI.
  2. Click Simple DNS Zone Editor
  3. Type the sub domain name (swap mydomain.com for your domain).
  4. Enter the IP for your Digital Ocean domain (you get this from the Digital Ocean account page).
  5. Click Add a record.

    DNS Zone
  6. Now when someone types http://newcloud.mydomain.com they reach your new cloud server, and the URL stays the same (how professional is that?).
  7. You can add multiple A name records pointing to the same IP.

Summary

$19 a month gives me a kick arse www.c9.io development environment and a few VMs.

$5 a month gives me my own real VM that I can scale up.

Coupon

You can easily deploy an SSD cloud server in 55 seconds for $5 a month. Sign up using my link and receive $10 in credit: https://www.digitalocean.com/?refcode=99a5082b6de5

Security

After a few weeks, do check your website with https://www.shodan.io and see if it has open software or is known to hackers.

V1.6 security

Filed Under: Cloud, Development, Domain, Hosting, Linux, Scalable, Security, VM Tagged With: cloud, cloud 9, code, development, digital ocean, ide, vm
