
IoT, Code, Security, Server Stuff etc

Views are my own and not my employer's.

Personal Development Blog...

Coding for fun since 1996, Learn by doing and sharing.




Connecting to an AWS EC2 Ubuntu instance with Cloud 9 IDE as user ubuntu and root

September 1, 2016 by Simon Fearby

Recently I set up an Amazon EC2 Ubuntu Server instance and wanted to connect it to the awesome Cloud 9 IDE, as I was sick of interacting with a server through terminal windows.

Use this link and get $19 free credit with Cloud 9: https://c9.io/c/DLtakOtNcba

Cloud 9 IDE (sample screenshot)

Previously I was using Digital Ocean (my Digital Ocean setup guide here) and this was simple: you get a VM with a root account and you do what you want. Amazon AWS, however, has extra layers of security that prevent logging in as root via SSH, and that can be a pain with Cloud 9 as your workspace tree is restricted to the ~/ (home) folder.

Below are the steps you need to connect to an AWS instance with user “ubuntu” and “root” with Cloud 9.

Connecting to an AWS instance with Cloud 9 as user “ubuntu”

1. Purchase and set-up your AWS instance (my guide here).

2. You need to be able to log in to your AWS server from a terminal prompt (from OSX).  This may include opening port 22 in the AWS Security Group panel. Info on SSH logins here.

ssh -i ~/.ssh/yourawsicskeypair.pem [email protected]

3. On your AWS server (from step 2), install NodeJS.

You will know node is installed if you get a version returned when typing the following bash command.

node -v

Tip: if node is not installed you can run the Cloud 9 prerequisites script (which includes node).

curl -L https://raw.githubusercontent.com/c9/install/master/install.sh | bash

4. Ensure you have created an SSH key on Cloud 9 (guide here).

5. Copy your Cloud 9 SSH key to the clipboard.

6. On your AWS server (from step 2), edit the ~/.ssh/authorized_keys file and paste in the Cloud 9 SSH key (after your AWS key pair that was added during the AWS setup) on a new line and save the file.
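If you prefer the command line to an editor, the same step can be sketched as below (the key string is a placeholder for your real Cloud 9 public key):

```shell
# Append the Cloud 9 public key on its own line; always append (>>),
# never overwrite, or you will lock out your existing AWS key pair.
mkdir -p ~/.ssh
echo 'ssh-rsa AAAAB3...your-cloud9-public-key... cloud9' >> ~/.ssh/authorized_keys
chmod 600 ~/.ssh/authorized_keys
```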

7. Log in to Cloud 9 and click create Workspace then Remote SSH Workspace.

  • Name your workspace (all lowercase and no spaces).
  • Username: ubuntu
  • Hostname: Add your AWS ec2 server hostname.
  • Initial Path: This has to be ~/


8. Click Create Workspace


9. If all goes well you will have a prompt to install the prerequisites.


If this fails check out the Cloud 9 guide here.

Troubleshooting: I had errors like “Project directory does not exist or is not writable” and “Unable to change File System Path in SSH Workspace” because I was trying to set the workspace path to “/” (this is not possible on AWS with the “ubuntu” account).

10. Now you should have a web-based IDE that allows you to browse your server, create and edit files, and run terminal instances that will reconnect if your net connection or browser tab drops out (you can even go to a different machine and continue with your session).


Connecting to an AWS instance with Cloud 9 as user “root”

Connecting to your server as the “ubuntu” user is fine if you just need to work in your “ubuntu” home folder.  As soon as you want to start changing settings outside of your home folder you are stuck.  Granting “ubuntu” higher privileges server-wide is a bad idea, so here is how you can enable “root” login via SSH.

WARNING: logging in as root is bad practice. Only allow root login for short periods, and remove root login as soon as you no longer need it, and certainly before going to production.

Having root access while developing or building a new server saves me bucketloads of time, so let’s allow it.

1. Follow step 1 to 5 in the steps above (setup AWS, ssh access via terminal, install node, create cloud 9 ssh key, copy the cloud 9 ssh key to the clipboard).

2. SSH to your AWS server and edit the following file:

sudo nano /etc/ssh/sshd_config
# -- Make the following change
# PermitRootLogin without-password
PermitRootLogin yes

Save the file, then restart the SSH service so the change takes effect: sudo service ssh restart

3. Back up your root authorized_keys file

sudo cp /root/.ssh/authorized_keys /root/.ssh/authorized_keys.bak

4. Edit the root authorized_keys file and paste in your Cloud 9 SSH Key.


5. Now you can create a Cloud 9 connection to your server as root:

  • Name your workspace (all lowercase and no spaces).
  • Username: root
  • Hostname: Add your AWS ec2 server hostname.
  • Initial Path: This has to be /


Tip: if you have not added your SSH key correctly you will receive an error when connecting.


6. You should now be able to connect to AWS ec2 instances with Cloud 9 as root and configure/do anything you want without switching to shell windows.


Security

As a precaution, check your website often on https://www.shodan.io to see if it has exposed services or is known to hackers.
Enjoy

If this guide has helped please consider donating a few dollars.






V1.6 security

Filed Under: Cloud, Domain, Hosting, Linux, NodeJS, Security, ssl Tagged With: AWS, c9, cloud, ssh, terminal

Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

August 16, 2016 by Simon Fearby

This post lists the actions I went through to set up an AWS EC2 Ubuntu server and add the usual applications. It follows on from my Digital Ocean server setup guide and Vultr setup guide.

Variable Pricing

Amazon Web Services gives developers 12 months of free access to a range of products. Amazon doesn’t have flat-rate fees like Digital Ocean; instead, AWS grants you a minimum level of CPU credits for each server type. A “t2.micro” (1 CPU/1GB RAM, $0.013/hour) earns 6 CPU credits an hour.  That is enough for the CPU to run at 10% all the time, and you can bank up to 144 CPU credits to spend when your application spikes.  The “t2.micro” is the free-tier server (costs nothing for 12 months), but when the trial runs out it is about $9.50 a month.  The next server up is a “t2.small” (1 CPU, 2GB RAM); you earn 12 CPU credits an hour and can bank 288, enough for 20% CPU usage all the time.

The “t2.medium” server (2 CPUs, 4GB RAM) allows 40% CPU usage, earning 24 CPU credits an hour with 576 bankable.  That server costs $0.052 an hour, about $38 a month. Don’t forget to cache content.

I used about 40 CPU credits generating a 4096-bit secure prime Diffie-Hellman key for an SSL certificate on my t2.micro server. More information on AWS instance pricing here and here.
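A quick way to sanity-check those baseline figures, assuming one CPU credit equals one minute of full CPU use:

```shell
# 6 credits earned per hour = 6 minutes of 100% CPU per 60 minutes
echo "t2.micro baseline: $((6 * 100 / 60))%"    # 10%
echo "t2.small baseline: $((12 * 100 / 60))%"   # 20%
```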

Creating an AWS Account (Free Trial)

The signup process is simple.

  1. Create a free AWS account.
  2. Enter your CC details (for any non-free services) and submit your account.

It can take 2 days or more for Amazon to manually approve your account.  When you have been approved, navigate to https://console.aws.amazon.com, log in, and set your region in the top right next to your name (in my case I will go with Australia ‘ap-southeast-2‘).

My console home is now: https://ap-southeast-2.console.aws.amazon.com/ec2/v2/home?region=ap-southeast-2#LaunchInstanceWizard

Create a Server

You can follow the prompts to set up a free tier EC2 Ubuntu server here.

1. Choose Ubuntu EC2

2. Choose Instance Type: t2-micro (1x CPU, 1GB Ram)

3. Configure Instance: 1

4. Add Storage: /dev/sda1, 8GB+, 10-3000 IOPS

5. Tag Instance: Your own role specific tags

6. Configure Security Group: Default firewall rules.

7. Review

Tip: Create a 25GB volume (instead of 8GB) or you will need to add an extra volume and mount it with the following commands.

sudo mkfs -t ext4 /dev/xvdf
sudo mkdir /mydata
sudo nano /etc/fstab
(append the following line)
/dev/xvdf    /mydata    ext4    defaults,nofail    0    2
sudo mount -a
cd /mydata
ls -al

Part of the EC2 server setup was to save a .PEM file to your SSH folder on your local PC (~/.ssh/myserverkeypair.pem).

You will need to secure the file:

chmod 400 ~/.ssh/myserverkeypair.pem
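Optionally, a host entry in your local ~/.ssh/config saves retyping the user and key path every time (the host alias, DNS name and key filename below are placeholders):

```
Host myaws
    HostName ec2-xx-xx-xx-xx.ap-southeast-2.compute.amazonaws.com
    User ubuntu
    IdentityFile ~/.ssh/myserverkeypair.pem
```

After that, `ssh myaws` is all you need to type.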

Before we connect to the server we need to configure the firewall here in the Amazon Console.

Type         Protocol  Port Range  Source     Comment
HTTP         TCP       80          0.0.0.0/0  Opens a web server port for later
All ICMP     ALL       N/A         0.0.0.0/0  Allows you to ping
All traffic  ALL       All         0.0.0.0/0  Not advisable long term but OK for testing today
SSH          TCP       22          0.0.0.0/0  Not advisable; try to limit this to known IPs only
HTTPS        TCP       443         0.0.0.0/0  Opens a secure web server port for later

DNS

You will need to assign a static IP to your server (the public IP is not static by default). Here is a good read on connecting a domain name to the IP and assigning an Elastic IP to your server.  Once you have assigned an Elastic IP you can point your domain to your instance.


Installing the Amazon Command Line Interface utils on your local PC

This is required to see your servers console screen and then connect via SSH.

curl "https://s3.amazonaws.com/aws-cli/awscli-bundle.zip" -o "awscli-bundle.zip"
unzip awscli-bundle.zip
sudo ./awscli-bundle/install -i /usr/local/aws -b /usr/local/bin/aws
/usr/local/bin/aws --version

You now need to configure your AWS CLI, first generate Access Keys here. While you are there setup Multi-Factor Authentication with Google Authenticator.

aws configure
> AWS Access Key ID: {Your AWS Access Key ID}
> AWS Secret Access Key: {Your AWS Secret Access Key}
> Default region name: ap-southeast-2
> Default output format: text

Once you have configured your CLI you can connect and review your Ubuntu console output (the instance ID can be found in your EC2 server list).

aws ec2 get-console-output --instance-id i-123456789

Now you can hopefully connect to your server, accepting the host key prompt to finish the connection.

ssh -i ~/.ssh/myserverpair.pem [email protected]


Success, I can now access my AWS Instance.

Setting the Time and Daylight Savings.

Check your time.

sudo hwclock --show
> Fri 21 Oct 2016 11:58:44 PM AEDT  -0.814403 seconds

My Daylight savings have not kicked in.

Install ntp service

sudo apt install ntp

Set your Timezone

sudo dpkg-reconfigure tzdata

Go to http://www.pool.ntp.org/zone/au and find your NTP servers (or go here if you are outside Australia):

server 0.au.pool.ntp.org
server 1.au.pool.ntp.org
server 2.au.pool.ntp.org
server 3.au.pool.ntp.org

Add the NTP servers to “/etc/ntp.conf” and restart the NTP service.

sudo service ntp restart

Now check your time again and you should have the right time.

sudo hwclock --show
> Fri 21 Oct 2016 11:07:38 PM AEDT  -0.966273 seconds

🙂

Installing NGINX

I am going to install the latest v1.11.1 mainline (development, non-legacy) version. Beware of bugs and breaking changes here.

sudo add-apt-repository ppa:chris-lea/nginx-devel
sudo apt-get update
sudo apt-get install nginx
sudo service nginx start
nginx -v

NGINX is now installed. Try and get to your domain via port 80 (if it fails to load, check your firewall).

Installing NodeJS

Here is how you can install the latest NodeJS (development build); beware of bugs and frequent changes. Read the API docs here.

curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs
nodejs -v

NodeJS is installed.

Installing MySQL

sudo apt-get install mysql-common
sudo apt-get install mysql-server
mysql --version
sudo mysql_install_db
sudo mysql_secure_installation
service mysql status

Install PHP 7.x and PHP7.0-FPM

I am going to install PHP 7 due to the speed improvements over 5.x.  Below were the commands I entered to install PHP (thanks to this guide)

sudo add-apt-repository ppa:ondrej/php
sudo apt-get install -y language-pack-en-base
sudo LC_ALL=en_US.UTF-8 add-apt-repository ppa:ondrej/php
sudo apt-get update
sudo apt-get install php7.0
sudo apt-get install php7.0-mysql
sudo apt-get install php7.0-fpm
sudo nano /etc/php/7.0/fpm/php.ini
> edit: cgi.fix_pathinfo=0
sudo service php7.0-fpm restart	
service php7.0-fpm status

Now install misc helper modules into php 7 (thanks to this guide)

sudo apt-get install php-xdebug
sudo apt-get install php7.0-phpdbg php7.0-mbstring php7.0-gd php7.0-imap 
sudo apt-get install php7.0-ldap php7.0-pgsql php7.0-pspell php7.0-recode 
sudo apt-get install php7.0-snmp php7.0-tidy php7.0-dev php7.0-intl 
sudo apt-get install php7.0-gd php7.0-curl php7.0-zip php7.0-xml
sudo nginx -s reload
sudo /etc/init.d/nginx restart
sudo service php7.0-fpm restart
php -v

NGINX Configuration

NGINX can be a bit tricky to set up for newbies, and your configuration will certainly be different, but here is mine (so far):

File: /etc/nginx/nginx.conf

user www-data;
worker_processes auto;
worker_cpu_affinity auto;
pid /var/run/nginx.pid;
events {
        worker_connections 1024;
        multi_accept on;
}
http {
        sendfile on;
        tcp_nopush on;
        tcp_nodelay on;
        keepalive_timeout 65;
        types_hash_max_size 2048;
        # server_tokens off;
        # server_names_hash_bucket_size 64;
        # server_name_in_redirect off;
        client_body_buffer_size      128k;
        client_max_body_size         10m;
        client_header_buffer_size    1k;
        large_client_header_buffers  4 4k;
        output_buffers               1 32k;
        postpone_output              1460;

        proxy_headers_hash_max_size 2048;
        proxy_headers_hash_bucket_size 512;

        client_header_timeout  1m;
        client_body_timeout    1m;
        send_timeout           1m;

        include /etc/nginx/mime.types;
        default_type application/octet-stream;

        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2; # Dropping SSLv3, ref: POODLE
        # ssl_prefer_server_ciphers on;

        access_log /var/log/nginx/access.log;
        error_log /var/log/nginx/error.log;

        gzip on;
	gzip_disable "msie6";
	gzip_vary on;
	gzip_proxied any;
	gzip_comp_level 6;
	gzip_buffers 16 8k;
	gzip_http_version 1.1;
	gzip_min_length 256;
	gzip_types text/plain text/css application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript application/vnd.ms-fontobject application/x-font-ttf font/opentype image/svg+xml image/x-icon;

        include /etc/nginx/conf.d/*.conf;
        include /etc/nginx/sites-enabled/*;
}

File: /etc/nginx/sites-available/default

proxy_cache_path /tmp/nginx-cache keys_zone=one:10m;

server {
        # listen [::]:80 default_server ipv6only=on; ## listen for ipv6

        access_log /var/log/nginx/myservername.com.log;

        root /usr/share/nginx/www;
        index index.php index.html index.htm;

        server_name www.myservername.com myservername.com localhost;

        # ssl on;
        # ssl_certificate /etc/nginx/ssl/cert_chain.crt;
        # ssl_certificate_key /etc/nginx/ssl/myservername.key;
        # ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";              # disable some old ciphers
        # ssl_prefer_server_ciphers on;
        # ssl_dhparam /etc/nginx/ssl/dhparams.pem;
        # ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        # server_tokens off;
        # ssl_session_cache shared:SSL:40m;                                           # More info: http://nginx.com/blog/improve-seo-https-nginx/
        # Set SSL caching and storage/timeout values:
        # ssl_session_timeout 4h;
        # ssl_session_tickets off; # Requires nginx >= 1.5.9
        # OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
        # ssl_stapling on; # Requires nginx >= 1.3.7
        # ssl_stapling_verify on; # Requires nginx => 1.3.7
        # add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

        # add_header X-Frame-Options DENY;                                            # Prevent Clickjacking

        # Prevent MIME Sniffing
        # add_header X-Content-Type-Options nosniff;


        # Use Google DNS
        # resolver 8.8.8.8 8.8.4.4 valid=300s;
        # resolver_timeout 1m;

        # This is handled with the header above.
        #rewrite ^/(.*) https://myservername.com/$1 permanent;

        location / {
                try_files $uri $uri/ =404;
                index index.php index.html index.htm;
                proxy_set_header Proxy "";
        }

        fastcgi_param PHP_VALUE "memory_limit = 512M";

        # pass the PHP scripts to FastCGI server listening on 127.0.0.1:9000
        location ~ \.php$ {
                try_files $uri =404;

                # include snippets/fastcgi-php.conf;

                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_index index.php;
                fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
                include fastcgi_params;
                fastcgi_pass unix:/var/run/php/php7.0-fpm.sock;

                # NOTE: You should have "cgi.fix_pathinfo = 0;" in php.ini
                # With php5-cgi alone:
                # fastcgi_pass 127.0.0.1:9000;
        }

        # deny access to .htaccess files, if Apache's document root
        #location ~ /.ht {
        #       deny all;
        #}
}

Test and Reload NGINX Config

sudo nginx -t
sudo nginx -s reload
sudo /etc/init.d/nginx restart

Don’t forget to test PHP with a script that calls ‘phpinfo()’.
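A minimal test script can be dropped into the web root. A sketch (WEBROOT is a placeholder; point it at your NGINX root, e.g. /usr/share/nginx/www — it defaults to the current directory here):

```shell
# Write a one-line PHP script that dumps the PHP configuration page.
WEBROOT="${WEBROOT:-.}"
printf '<?php phpinfo();\n' > "$WEBROOT/info.php"
cat "$WEBROOT/info.php"
```

Browse to http://yourserver/info.php to confirm PHP-FPM is wired up, and delete the file afterwards, as it leaks configuration details.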

Install PhpMyAdmin

Here is how you can install the latest stable branch of phpMyAdmin under NGINX (no Apache):

cd /usr/share/nginx/www
mkdir -p my/secure/secure/folder
cd my/secure/secure/folder
sudo apt-get install git
git clone --depth=1 --branch=STABLE https://github.com/phpmyadmin/phpmyadmin.git

If you need to import databases into MySQL you will need to enable file uploads in PHP and set file upload limits.  Review this guide to enable uploads in phpMyAdmin.  Also, if your database is large, you may need to increase the “client_max_body_size” setting in nginx.conf (see guide here). Don’t forget to disable uploads and reduce the size limits in NGINX and PHP once you have uploaded your databases.
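As a rough sketch (the 64M values are examples, not recommendations), the relevant settings live in php.ini and nginx.conf:

```
# /etc/php/7.0/fpm/php.ini
upload_max_filesize = 64M
post_max_size = 64M

# /etc/nginx/nginx.conf (inside the http block)
client_max_body_size 64m;
```

Restart php7.0-fpm and reload NGINX after changing either file.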

Note: phpMyAdmin can be a pain to install, so don’t be afraid of using an alternative management GUI.  Here is a good list of MySQL management interfaces. Also check your OS app store for native MySQL database management clients.

Install an FTP Server

Follow this guide here, then:

sudo nano /etc/vsftpd.conf
write_enable=YES
sudo service vsftpd restart

Install: oracle-java8 (using this guide)

sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
sudo apt-get install oracle-java8-installer
cd /usr/lib/jvm/java-8-oracle
ls -al
sudo nano /etc/environment
> append "JAVA_HOME="/usr/lib/jvm/java-8-oracle""
echo $JAVA_HOME
sudo update-alternatives --config java

Install: ncdu – Interactive tree based folder usage utility

sudo apt-get install ncdu
sudo ncdu /

Install: pydf – better quick disk check tool

sudo apt-get install pydf
sudo pydf

Install: rcconf – display startup processes (handy when confirming pm2 was running).

sudo apt-get install rcconf
sudo rcconf

I started “php7.0-fpm” as it was not starting on boot.

I had an issue where PM2 was not starting up at server reboot and reporting to https://app.keymetrics.io.  I ended up repairing the /etc/init.d/pm2-init.sh as mentioned here.

sudo nano /etc/init.d/pm2-init.sh
# edit start function to look like this
...
start() {
    echo "Starting $NAME"
    export PM2_HOME="/home/ubuntu/.pm2" # <== add this line
    super $PM2 resurrect
}
...

Install IpTraf – Network Packet Monitor

sudo apt-get install iptraf
sudo iptraf

Install jq – JSON Command Line Utility

sudo apt-get install jq
# Download and display a json file with jq
curl 'https://api.github.com/repos/stedolan/jq/commits?per_page=5' | jq .

Install Ruby (via rbenv and ruby-build)

sudo apt-get update
sudo apt-get install git-core curl zlib1g-dev build-essential libssl-dev libreadline-dev libyaml-dev libsqlite3-dev sqlite3 libxml2-dev libxslt1-dev libcurl4-openssl-dev python-software-properties libffi-dev
git clone git://github.com/sstephenson/rbenv.git ~/.rbenv
git clone git://github.com/sstephenson/ruby-build.git ~/.rbenv/plugins/ruby-build
git clone https://github.com/sstephenson/rbenv-gem-rehash.git ~/.rbenv/plugins/rbenv-gem-rehash
echo 'export PATH="$HOME/.rbenv/bin:$PATH"' >> ~/.bashrc
echo 'eval "$(rbenv init -)"' >> ~/.bashrc
exec $SHELL
rbenv install 2.2.3
rbenv global 2.2.3
rbenv rehash
ruby -v
sudo apt-get install ruby-dev
gem install bundler

Install Twitter CLI – https://github.com/sferik/t

sudo gem install t
t authorize # follow the prompts

Mutt (send mail by command line utility)

Help site: https://wiki.ubuntu.com/Mutt

sudo nano /etc/hosts
127.0.0.1 localhost localhost.localdomain xxx.xxx.xxx.xxx yourdomain.com yourdomain

Configuration:

sudo apt-get install mutt
[configuration none]
sudo nano /etc/postfix/main.cf
[setup a]

Configure postfix guide here

Extend the History commands history

I love the history command, and here is how you can expand its history and ignore duplicates.

HISTSIZE=10000
HISTCONTROL=ignoredups
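To make these stick across sessions, append them to ~/.bashrc (a sketch; adjust the values to taste):

```shell
# Persist the history tweaks for future shells
cat >> ~/.bashrc <<'EOF'
HISTSIZE=10000
HISTCONTROL=ignoredups
EOF
```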

Don’t forget to check your server’s IP with www.shodan.io to ensure there are no back doors.

Cont…

Next: I will add an SSL cert, lock down the server and setup Node Proxies.

If this guide was helpful please consider donating a few dollars to keep me caffeinated.






v1.61 added vultr guide link

Filed Under: Cloud, Development, Domain, Hosting, MySQL, NodeJS, Security Tagged With: AWS, server

Application scalability on a budget (my journey)

August 12, 2016 by Simon Fearby

If you have read my other guides on https://www.fearby.com you can tell I like the self-managed Ubuntu servers you can buy from Digital Ocean for as low as $5 a month (click here to get $10 in free credit and start your server in 5 minutes). Vultr has servers as low as $2.50 a month. Digital Ocean is a great place to start up your own server in the cloud, install some software and deploy some web apps or the backend (API/databases/content) for mobile apps or services.  If you need more memory, processor cores or hard drive storage, simply shut down your Digital Ocean server, click a few options to increase your server resources and you are good to go (this is called “scaling up“). Don’t forget to cache content to limit usage.

This scalability guide is a work in progress (watch this space). My aim is to serve 2000 concurrent geo queries a second (like Pokémon Go) for under $80 a month (1x server and 1x MongoDB cluster).  Currently serving 600~1200/sec.

Buying a Domain

Buy a domain name from Namecheap here.


Estimating Costs

If you don’t estimate costs you are planning to fail.

"By failing to prepare you are preparing to fail." - Benjamin Franklin

Estimate the minimum users you need to remain viable and then the expected maximum users you need to handle. What will this cost?
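For example, converting AWS-style hourly rates into a monthly figure (assuming roughly 730 hours in a month) is a one-liner:

```shell
# t2.micro at $0.013/hour over ~730 hours/month
awk 'BEGIN { printf "t2.micro: $%.2f/month\n", 0.013 * 730 }'
```

which matches the roughly $9.50 a month quoted for a t2.micro once the free tier ends.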

Planning for success

Anyone who has researched application scalability has come across articles on apps that launched and crashed under load.  Even governments can spend tens of millions developing a scalable solution, plan for years and fail dismally on launch (check out the Australian Census disaster).  The Australian government contracted IBM to develop a solution to receive up to 15 million census submissions between the 28th of July and the 5th of September. IBM designed a system and a third party performance-tested it for up to 400 submissions a second, but the maximum received on census night before the system crashed was only 154 submissions a second. Predicting application usage can be hard; in the case of the Australian census, the bulk of people logged on to submit census data on the recommended night of the 9th of August 2016.

Sticking to a budget

This guide is not for people with deep pockets wanting to deploy a service to 15 million people but for solo app developers or small non-funded startups on a serious budget.  If you want a very reliable scalable solution or service provider you may want to skip this article and check out services by the following vendors.

  • Firebase
  • Azure (good guides by Troy Hunt: here, here and here).
  • Amazon Web Services
  • Google Cloud
  • NGINX Plus

The above vendors have what seems like an infinite array of products and services that can form part of your solution, but beware: the more products you use, the more complex and costly it will be.  A popular app can be an expensive app. That’s why I like Digital Ocean: you don’t need a degree to predict and plan your server’s average usage and buy extra resource credits if you go over predicted limits. With Digital Ocean you buy a virtual server and you get known memory, storage and data transfer limits.

Let’s go over topics that you will need to consider when designing or building a scalable app on a budget.

Application Design

Your application needs will ultimately decide the technology and servers you require.

  • A simple business app that shares events, products and contacts would require a basic server and MySQL database.
  • A turn-based multiplayer app for a few hundred people would require more server resources and endpoints (a NGINX, NODEJS and an optimized MySQL database would be ok).
  • A larger augmented reality app for thousands of people would require a mix of databases and servers to separate the workload (a NGINX webserver and NodeJS powered API talking to a MySQL database to handle logins and a single server NoSQL database for the bulk of the shared data).
  • An augmented reality app with tens of thousands of users (a NGINX web server, NodeJS powered API talking to a MySQL database to handle logins and NoSQL cluster for the bulk of the shared data).
  • A business-critical multi-user application with real-time chat – are you sure you are on a budget? This will require a full solution from Azure, Firebase or Amazon Web Services.

A native app, hybrid app or full web app can drastically change how your application works ( learn the difference here ).

Location, location, location.

You want your server and resources to be as close to your customers as possible, this is one rule that cannot be broken. If you need to spend more money to buy a server in a location closer to your customers do it.

My Setup

I have a Digital Ocean server with 2 cores and 2GB of RAM in Singapore that I use to test and develop apps. That one server has MySQL, NGINX, NodeJS, PHP and many scripts running on it in the background.  I also have a MongoDB cluster (3 servers) running on AWS in Sydney via MongoDB.com.  I looked into CouchDB via Cloudant but needed the GeoJSON features with fair dedicated pricing. I am considering moving off the Digital Ocean Ubuntu server (in Singapore) and onto an AWS server (in Sydney). I am using promise-based NodeJS calls where possible to avoid blocking calls to the operating system, database or web.  Update: I moved to a Vultr domain (article here)

Here is a benchmark for HTTP and HTTPS requests from rural NSW to a Node server (behind NGINX) in Singapore. The request travels via Sydney, Melbourne, Adelaide and Perth, and the server then makes a callback to Sydney to run a geo query against a large database before returning the result to the customer via Singapore.

SSL

SSL will add processing overhead and latency.

Here is a breakdown of the hops from my desktop in Regional NSW making a network call to my Digital Ocean Server in Singapore (with private parts redacted or masked).

traceroute to destination-server-redacted.com (###.###.###.##), 64 hops max, 52 byte packets
 1  192-168-1-1 (192.168.1.1)  11.034 ms  6.180 ms  2.169 ms
 2  xx.xx.xx.xxx.isp.com.au (xx.xx.xx.xxx)  32.396 ms  37.118 ms  33.749 ms
 3  xxx-xxx-xxx-xxx (xxx.xxx.xxx.xxx)  40.676 ms  63.648 ms  28.446 ms
 4  syd-gls-har-wgw1-be-100 (203.221.3.7)  38.736 ms  38.549 ms  29.584 ms
 5  203-219-107-198.static.tpgi.com.au (203.219.107.198)  27.980 ms  38.568 ms  43.879 ms
 6  tengige0-3-0-19.chw-edge901.sydney.telstra.net (139.130.209.229)  30.304 ms  35.090 ms  43.836 ms
 7  bundle-ether13.chw-core10.sydney.telstra.net (203.50.11.98)  29.477 ms  28.705 ms  40.764 ms
 8  bundle-ether8.exi-core10.melbourne.telstra.net (203.50.11.125)  41.885 ms  50.211 ms  45.917 ms
 9  bundle-ether5.way-core4.adelaide.telstra.net (203.50.11.92)  66.795 ms  59.570 ms  59.084 ms
10  bundle-ether5.pie-core1.perth.telstra.net (203.50.11.17)  90.671 ms  91.315 ms  89.123 ms
11  203.50.9.2 (203.50.9.2) 80.295 ms  82.578 ms  85.224 ms
12  i-0-0-1-0.skdi-core01.bx.telstraglobal.net (Singapore) (202.84.143.2)  132.445 ms  129.205 ms  147.320 ms
13  i-0-1-0-0.istt04.bi.telstraglobal.net (202.84.243.2)  156.488 ms
    202.84.244.42 (202.84.244.42)  161.982 ms
    i-0-0-0-4.istt04.bi.telstraglobal.net (202.84.243.110)  160.952 ms
14  unknown.telstraglobal.net (202.127.73.138)  155.392 ms  152.938 ms  197.915 ms
15  * * *
16  destination-server-redacted.com (xx.xx.xx.xxx)  177.883 ms  158.938 ms  153.433 ms

About 160 ms to send a request to the server. This is on a good day, when the Netflix effect is not killing links across Australia.

Here is the route for a call from the server above to the MongoDB Cluster on an Amazon Web Services in Sydney from the Digital Ocean Server in Singapore.

traceroute to redactedname-shard-00-00-nvjmn.mongodb.net (##.##.##.##), 30 hops max, 60 byte packets
 1  ###.###.###.### (###.###.###.###)  0.475 ms ###.###.###.### (###.###.###.###)  0.494 ms ###.###.###.### (###.###.###.###)  0.405 ms
 2  138.197.250.212 (138.197.250.212)  0.367 ms 138.197.250.216 (138.197.250.216)  0.392 ms  0.377 ms
 3  unknown.telstraglobal.net (202.127.73.137)  1.460 ms 138.197.250.201 (138.197.250.201)  0.283 ms unknown.telstraglobal.net (202.127.73.137)  1.456 ms
 4  i-0-2-0-10.istt-core02.bi.telstraglobal.net (202.84.225.222)  1.338 ms i-0-4-0-0.istt-core02.bi.telstraglobal.net (202.84.225.233)  3.817 ms unknown.telstraglobal.net (202.127.73.137)  1.443 ms
 5  i-0-2-0-9.istt-core02.bi.telstraglobal.net (202.84.225.218)  1.270 ms i-0-1-0-0.pthw-core01.bx.telstraglobal.net (202.84.141.157)  50.869 ms i-0-0-0-0.pthw-core01.bx.telstraglobal.net (202.84.141.153)  49.789 ms
 6  i-0-1-0-5.sydp-core03.bi.telstraglobal.net (202.84.143.145)  107.395 ms  108.350 ms  105.924 ms
 7  i-0-1-0-5.sydp-core03.bi.telstraglobal.net (202.84.143.145)  105.911 ms 21459.tauc01.cu.telstraglobal.net (134.159.124.85)  108.258 ms  107.337 ms
 8  21459.tauc01.cu.telstraglobal.net (134.159.124.85)  107.330 ms unknown.telstraglobal.net (134.159.124.86)  101.459 ms  102.337 ms
 9  * unknown.telstraglobal.net (134.159.124.86)  102.324 ms  102.314 ms
10  * * *
11  54.240.192.107 (54.240.192.107)  103.016 ms  103.892 ms  105.157 ms
12  * * 54.240.192.107 (54.240.192.107)  103.843 ms
13  * * *
14  * * *
15  * * *
16  * * *
17  * * *
18  * * *
19  * * *
20  * * *
21  * * *
22  * * *
23  * * *
24  * * *
25  * * *
26  * * *
27  * * *
28  * * *
29  * * *
30  * * *

It appears Telstra Global or AWS block traceroute closer to the destination, so I will ping instead to see how long the trip takes.

bytes from ec2-##-##-##-##.ap-southeast-2.compute.amazonaws.com (##.##.##.##): icmp_seq=1 ttl=50 time=103 ms

It is obvious that the longest part of the response to the client is not the geo query on the MongoDB cluster or the processing in NodeJS, but the packet's travel time and the cost of securing it.

My server locations are not optimal. I cannot move the AWS MongoDB cluster to Singapore because MongoDB doesn't have servers in Singapore, and Digital Ocean doesn't have servers in Sydney. I should move my services on Digital Ocean to Sydney, but for now let's see how far this Digital Ocean server in Singapore and MongoDB cluster in Sydney can go. I wish I had known about Vultr, as they are like Digital Ocean but have a location in Sydney.

Security

Secure (SSL) communication is now mandatory for Apple and Android apps talking over the internet, so we can't eliminate that to speed up the connection, but we can move the server. I am using more modern SSL ciphers in my SSL configuration, so they may slow down the process as well. Here is a speed test of my server's ciphers. If you use stronger security, expect a small CPU hit.

cipherspeed

fyi: I have a few guides on adding a commercial SSL certificate to a Digital Ocean VM, updating OpenSSL on a Digital Ocean VM, configuring NGINX and SSL, and limiting SSH connection rates to prevent brute-force attacks.

Server Limitations and Benchmarking

If you are running your website on a shared server (e.g. a cPanel host) you may encounter resource limit warnings, as web hosts and some providers want to charge you more for moderate to heavy use.

Resource Limit Is Reached 508
The website is temporarily unable to service your request as it exceeded resource limit. Please try again later.

I have never received a resource limit reached warning with Digital Ocean.

Most hosts (AWS, Digital Ocean, Azure etc.) have limitations on your server, and when you exceed a magical limit they restrict your server or start charging excess fees (they are not running a charity). AWS and Azure have different terminology for CPU credits, and you really need to predict your application's CPU usage to factor in scalability and monthly costs. Servers and databases generally have limited IOPS (input/output operations per second), and lower-tier plans offer lower IOPS. MongoDB Atlas lower tiers have < 120 IOPS, middle tiers have 240~2,400 IOPS and higher tiers have 3,000~20,000 IOPS.

Know your bottlenecks

The siege HTTP stress-testing tool is good; the command below will throw 400 local HTTP connections at your website.

#!/bin/bash
# Throw 400 concurrent users at the page for 1 minute
siege -t1m -c400 'http://your.server.com/page'

The results seem a bit low: 47.3 trans/sec. No failed transactions, though 🙂

** SIEGE 3.0.5
** Preparing 400 concurrent users for battle.
The server is now under siege...
Lifting the server siege.. done.

Transactions: 2803 hits
Availability: 100.00 %
Elapsed time: 59.26 secs
Data transferred: 79.71 MB
Response time: 7.87 secs
Transaction rate: 47.30 trans/sec
Throughput: 1.35 MB/sec
Concurrency: 372.02
Successful transactions: 2803
Failed transactions: 0
Longest transaction: 8.56
Shortest transaction: 2.37

Sites like http://loader.io/ allow you to hit your web server or web page with many requests a second from outside your server.  Below I threw 50 concurrent users at a Node API endpoint that ran a geo query on my MongoDB cluster.

nodebench50c

The server can easily handle 50 concurrent users a second. Latency is an issue though.

nodebench50b

I can see the two secondary MongoDB servers being queried 🙂

nodebench50a

Node has decided to only use one CPU under this light load.

I tried 100 concurrent users over 30 seconds. CPU activity was about 40% of one core.

nodebench50d

I tried again with a 100-200 concurrent user limit (passed). CPU activity was about 50% using two cores.

nodebench50e

I tried again with a 200-400 concurrent user limit over 1 minute (passed). CPU activity was about 80% using two cores.

nodebench50f

nodebench50g

It is nice to know my promise-based NodeJS code can handle 400 concurrent users requesting a large GeoJSON dataset without timeouts. The result is about the same as siege (47.6 trans/sec). The issue now is the delay in the data getting back to the user.

I checked the MongoDB cluster and I was only reaching 0.17 IOPS (maximum 100) and 16% CPU usage so the database cluster is not the bottleneck here.

nodebench50h

Out of curiosity, I ran a 400-connection benchmark against the Node server over HTTP instead of HTTPS and the results were nearly identical (400 concurrent connections with an 8,000 ms delay).

I really need to move my servers closer together to avoid the delays in responding. 47.6 geo queries served a second (4,112,640 a day) with a large payload is OK, but it is not good enough for my application yet.

Limiting Access

I may limit access to my API based on geo lookups (http://ipinfo.io is a good service that allows you to programmatically limit access to your app services) and auth tokens, but this will slow down uncached requests.
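A minimal sketch of that gate, assuming the country code has already come back from a geo lookup such as ipinfo.io (its JSON response includes a country field); the allowlist itself is hypothetical:

```javascript
// Sketch of gating API access by country. The lookup (e.g. a request to
// https://ipinfo.io/<ip>/json) is assumed; only the gate logic is shown.
const ALLOWED_COUNTRIES = new Set(['AU', 'NZ']); // hypothetical allowlist

function isAllowed(countryCode) {
  return ALLOWED_COUNTRIES.has(countryCode);
}

console.log(isAllowed('AU')); // true
console.log(isAllowed('US')); // false
```

In a real API you would cache the lookup result per IP so every request doesn't pay for a round trip to the geo service.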

Scale Up

I can always add more cores or memory to my server in minutes, but that requires a shutdown. 400 concurrent users do max out my CPU and push memory above 80%, so adding more cores and memory would be beneficial.

Digital Ocean does allow me to permanently or temporarily raise the resources of the virtual machine. To obtain 2 more cores (4 total) and 4x the memory (8GB) I would need to jump to the $80/month plan and adjust the NGINX and Node configuration to use the extra cores/RAM.

nodebench50i

If my app is profitable I can certainly reinvest.

Scale Out

With MongoDB clusters, I can easily shard a cluster and gain extra throughput if I need it, but with only 0.17% of my existing cluster being utilised I should focus on moving my servers closer together.

NGINX does have commercial-level products that handle scalability, but they cost thousands. I could scale out manually by setting up Node proxies that point to multiple servers receiving parent calls. This may be more beneficial, as Digital Ocean servers start at $5 a month, but it would add a whole lot of complexity.

Cache Solutions

  • Nginx caching
  • OpCache if you are using PHP.
  • Node-cache – in-memory caching.
  • Redis – in-memory caching.
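For illustration, here is the in-memory TTL idea behind node-cache and Redis, sketched with a plain Map (a real library adds eviction, stats and clustering on top of this):

```javascript
// A tiny in-memory cache with per-key TTLs, illustrating what node-cache
// and Redis do at their simplest.
class TtlCache {
  constructor() { this.store = new Map(); }
  set(key, value, ttlMs) {
    this.store.set(key, { value, expires: Date.now() + ttlMs });
  }
  get(key) {
    const hit = this.store.get(key);
    if (!hit) return undefined;
    if (Date.now() > hit.expires) { this.store.delete(key); return undefined; }
    return hit.value;
  }
}

const cache = new TtlCache();
// Hypothetical key: cache a geo-query result for 60 seconds.
cache.set('geo:nearby:-31.09,150.93', { results: 42 }, 60 * 1000);
console.log(cache.get('geo:nearby:-31.09,150.93').results); // prints: 42
```

Caching a hot geo query like this turns a database round trip into a memory lookup for the life of the TTL.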

Monitoring

Monitoring your server and resources is essential for detecting memory leaks and spikes in activity. htop is a great command-line monitoring tool on Linux.

http://pm2.keymetrics.io/ is a good Node process monitoring app, but it does go a bit crazy with processes on your box.

CPU Busy

Communication

It is a good idea to inform users of server status and issues with delayed queries, and when things go down, inform people early. Update: article here on self-service status pages.

censisfail

The Future

UPDATE: 17th August 2016

I set up an Amazon Web Services EC2 server (read the AWS setup guide here) with only 1 CPU and 1GB of RAM and have easily achieved 700 concurrent connections.  That's 41,869 geo queries served a minute.

Creating an AWS EC2 Ubuntu 14.04 server with NGINX, Node and MySQL and phpMyAdmin

AWS MongoDB Test

The MongoDB cluster CPU was at 25% usage with 200 query opcounters on each secondary server.

I think I will optimize the AWS OS ‘swappiness’ and performance stats and aim for 2000 queries.

This is how many hits I can get with the CPU remaining under 95% (794 geo serves a second). AMAZING.

AWS MongoDB Test

Another recent benchmark:

AWS Benchmark

UPDATE: 3rd Jan 2017

I decided to ditch the cluster of three AWS servers running MongoDB and instead set up a single MongoDB instance on an Amazon t2.medium server (2 CPU/4GB RAM) for about $50 a month. I can always upgrade to the AWS MongoDB cluster later if I need it.

OK, I just threw 2,000 concurrent users at the new single-server AWS MongoDB instance and it was able to handle the delivery: no dropped connections, but the average response time was 4,027 ms. That is not ideal, but this is 2,000 users a second, and that is after the API handles the IP banned list, user account validity, the last-5-minute query limit check (from MySQL), payload validation on every field and then the MongoDB geo query.
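The last-5-minute query limit can be sketched as a sliding window. The post keeps this state in MySQL; this in-memory version (with a hypothetical limit of 100 queries per window) just shows the idea:

```javascript
// Sliding-window rate limit: allow at most LIMIT queries per user in the
// last WINDOW_MS milliseconds. LIMIT is a hypothetical cap for illustration.
const WINDOW_MS = 5 * 60 * 1000;
const LIMIT = 100;
const hits = new Map(); // userId -> timestamps of recent allowed queries

function allowQuery(userId, now = Date.now()) {
  const recent = (hits.get(userId) || []).filter((t) => now - t < WINDOW_MS);
  if (recent.length >= LIMIT) {
    hits.set(userId, recent);
    return false; // over the limit for this window
  }
  recent.push(now);
  hits.set(userId, recent);
  return true;
}

for (let i = 0; i < LIMIT; i++) allowQuery('user-1');
console.log(allowQuery('user-1')); // false: one over the limit in the window
console.log(allowQuery('user-2')); // true: a different user is unaffected
```

Backing this with MySQL (or Redis) makes the limit survive restarts and work across multiple API servers.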

scalability_budget_2017_001

The two cores on the server were hitting about 95% usage. The benchmark here uses the same dataset as above, but the API adds a whole series of payload validation, user limiting and logging checks.

Benchmarking with 1,000 sustained users a second, the average response time is a much lower 1,022 ms. Honestly, if I had 1,000-2,000 user queries a second I would upgrade the server or add a series of lower-spec AWS t2.micro servers and create my own cluster.

Security

Cheap may not be good (hosted or DIY); do check your website often on https://www.shodan.io to see if it exposes software or is known to hackers.
If this guide has helped please consider donating a few dollars.

Donate and make this blog better




Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.7 added self-service status page info and Vultr info

Short: https://fearby.com/go2/scalability/

Filed Under: Cloud, Development, Domain, Hosting, MySQL, NodeJS, Scalability, Scalable, Security, ssl, VM Tagged With: api, digital ocean, mongodb, scalability, server

Computer hardware, clock cycles and code ramblings

April 18, 2016 by Simon Fearby

Modern computers have insane amounts of processing power compared to computers from 5 years ago. Computer memory and storage are cheap, but that is no excuse to design and develop bloated webpages and apps. Consumers and customers are very impatient, and there are loads of statistics on users abandoning an app or website because it takes more than three seconds to respond or load.

You can control the speed of software running on your home computer by upgrading it, but you cannot guarantee the performance of apps that run on shared hosting platforms or web hosts.  You can buy a cPanel-based web host or a dedicated server from $5 a month; how can they make money? They do this by virtually hosting your service (web server etc.) alongside other customers and running multiple services on a single processor core. Shared servers are very economical, but you are sharing the resources with other users.

If you want maximum performance you can always buy a dedicated server from a cloud server provider, but each provider may quietly share the resources of that server (more information here: http://blog.cloudharmony.com/2014/07/comparing-cloud-compute-services.html) and performance may be impacted. Dedicated servers can be very expensive and can run into thousands of dollars per month.

So what can I control?

Writing (or installing) good code is essential; try to optimize everything and know your server's limitations and bottlenecks. To understand bottlenecks, you need to know about computer hardware. A few lines of code can trigger millions or billions of actions inside a processor.

A computer has the following major components:

  • Hard drive (HDD/SSD): This is where your operating system, software and files are stored when the computer is turned off. Hard drives store data magnetically (0's and 1's) on spinning metal platters: a 0 is one magnetic polarity and a 1 is the other. Hard drives spin at 5,400~15,000 RPM. Data is read and written with a read/write head that needs to be positioned over the data bit. Hard drives are very slow but reliable, and each data bit can be read/written tens of thousands of times. Faster solid-state drives don't use spinning metal platters and work a bit like memory (see below), though solid-state drives have limited writes per sector. Read more: https://en.wikipedia.org/wiki/Hard_disk_drive
  • Memory (RAM): Computer memory is basically a large array of very fast storage that the processor reads data from and writes data to (0's and 1's). Memory is like a massive spreadsheet grid, and accessing data from memory is about 1,000x faster than accessing data from a hard drive. Memory stores data as electrical charges in silicon microchips, and each storage bit can be changed millions of times. When a computer is turned off the memory is wiped. Read more: https://en.wikipedia.org/wiki/Computer_memory
  • Processor (CPU): This is the chip that does the primary calculations and controls just about everything. A processor can perform various predetermined functions and read and write to memory/hard drives or send data over a USB cable or network connection. Processors are quite dumb: they have to keep queues (pipelines) of things to do in their internal cache (memory) between cycles. A clock cycle is a single step where the processor (and all of its cores) does one thing and gets ready for the next clock cycle. All clock cycles in a software routine are linked, and if one instruction fails, all following linked instructions have to be cleared and dealt with or errors and blue screens can happen. A processor's speed is how many clock cycles it can perform in a second; a modern computer can process 3,500,000,000 cycles a second (3.5 GHz). A processor can calculate one complex instruction or multiple simple instructions in one cycle, and most processors have multiple cores that can each perform calculations every clock cycle. But don't be fooled: many clock cycles are spent waiting for data to be read from or written to memory or the hard drive, or loaded from the processor's cache. A processor's instruction pipeline has four main stages for each action ("fetch", "decode", "execute" and "write back"). For example, the processor may be asked to add variable1 + variable2: "fetch" retrieves the instruction, "decode" gets the values from memory, "execute" performs the calculation and "write back" writes the result back to memory. (See a complete list of Intel instructions here and here.) Read more: https://en.wikipedia.org/wiki/Central_processing_unit

Processors are mostly waiting for data to be fetched to be processed.  There is no such thing as 100% efficient code.

Reading a file from a spinning hard drive incurs a mandatory latency period (https://en.wikipedia.org/wiki/Hard_disk_drive_performance_characteristics) while the hard drive's read head moves in or out, reads the data from the right sectors and returns it. A 3.5 GHz computer has to wait approximately 19,460,000 clock cycles for a sector on a hard drive to be under the read head, and the data still has to be moved from the hard drive through the processor and into memory. Luckily, processors have fantastic branch prediction abilities (https://en.wikipedia.org/wiki/Branch_predictor), so even though the software has asked for a file to be read, the processor can work on 19 million other cycles before checking to see if the data has returned from the hard drive.
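The arithmetic behind that "19 million cycles" figure can be checked quickly, assuming a 5,400 RPM drive (average rotational latency is half a revolution) and a 3.5 GHz CPU:

```javascript
// Rough arithmetic for cycles wasted waiting on a spinning disk.
const rpm = 5400;
const avgLatencySec = (60 / rpm) / 2;  // half a revolution on average, ~0.00556 s
const cpuHz = 3.5e9;                   // 3.5 GHz
const wastedCycles = avgLatencySec * cpuHz;
console.log(Math.round(wastedCycles)); // ≈ 19,444,444 cycles
```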

Caching content

One solution is to have software or servers cache certain files in memory to speed up delivery. The latest DDR4 computer memory runs at blistering speeds of 2,400 MHz (2,400,000,000 cycles a second), so it should keep up with a 2.4 GHz computer? Memory is cheap and fast, but computer memory has a huge limitation: you can't just ask memory to return the value of a memory cell and expect it in a few cycles. The processor has to essentially guide the memory module to activate the required electrical columns and rows to allow that value to be read and returned. This is like giving instructions to a driver over a phone: it takes time for the driver to listen, turn a corner, drive down a street and then turn another corner just to get to the destination. The processor has to manage millions of memory reads and writes a second; memory can't direct itself to a value, the processor has to do that.

Memory timings (RAM timings) are explained better here (http://www.hardwaresecrets.com/understanding-ram-timings/). It takes a modern DDR4 memory module about 15 clock cycles just to enable the column circuit for a memory cell to be activated, then another 15 clock cycles to activate the row, and a whole load of other cycles to read the data. Reading a 1 MB file from memory may take 100,000,000 clock cycles (and that is not factoring in that the processor is working on other tasks). A computer process is the name given to software code that has been handed over to the processor; software code is loaded into the processor/memory as instructions, and depending on the code and user interactions, different parts of the software's instructions are loaded into the processor. In any given second a computer program may enter and leave a processor over 1,000 times, and a processor's internal memory is quite small.

Benchmarking

Choosing a good host for your website, mobile app or APIs is very important; sometimes the biggest provider is not the fastest. You should benchmark how long actions take on your site and what the theoretical maximum limit is. Do you need more memory or cores? Hosts will always sell you more resources for money.

http://www.webpagetest.org/ is a great site to benchmark how long your website takes to deliver each part of your website to customers around the world.  You can minify (shrink) your code and images to reduce the processing time per page load.

If you are keen, research caching tools like OpCache (http://php.net/manual/en/book.opcache.php) for PHP, Memcached (https://www.digitalocean.com/community/tutorials/how-to-install-and-use-memcache-on-ubuntu-14-04) for PHP or MySQL, or the WordPress W3 Total Cache plugin (https://wordpress.org/plugins/w3-total-cache/).

Place your website and application databases close to your customers.  In Australia, it takes a minimum of 1/5 of a second for a server outside of Australia to respond.  A website that loads 30 resources compounds the delay between your server and customers (30 × 1/5 of a second adds up).

Consider merging and minifying website resources (http://www.minifyweb.com/) to lower the number and size of the files you deliver to users. Most importantly, monitor your website 24/7 to see if it is slowing down. I use http://monitis.com to monitor server performance remotely.

Summary

I hope I have not confused you too much. Try some videos below to learn more.

Good Videos: 

How a CPU Works:

How Processors are Made:

How a Hard Drive works in Slow Motion – The Slow Mo Guys

What’s Inside a CPU?

Zoom Into a Microchip (Narrated)

How computers work in less than 20 minutes

Read some of my other development-related guides here https://fearby.com/

Filed Under: Cloud, Development, Domain, Hosting, MySQL, Security, VM, Wordpress Tagged With: code, hard drive, memory, optimize, processor, solid state

How to buy a new domain (and a dedicated server from Digital Ocean) and add an SSL certificate from Namecheap.

December 3, 2015 by Simon Fearby

This guide will show you how to buy a domain and link it to a Digital Ocean VM.

Update (June 2018): I don’t use Digital Ocean anymore. I moved my domain to UpCloud (they are that awesome). Use this link to signup and get $25 free credit. Read the steps I took to move my domain to UpCloud here.

Upcloud Site Speed in GTMetrix

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

This old post is available fyi,

1. How to buy a new website domain from namecheap.com

1.1 Create an account at namecheap.com then navigate to registrations

1.2 Search for your domain (don’t forget to click show more to see other domain extension types).

1.3 Select the domain you want.

1.4 I am going to opt for a free year of WhoisGuard (WhoisGuard is a service that allows customers to keep their domain contact details hidden from spammers, marketing firms and online fraudsters; once purchased, the WhoisGuard subscription is permanently assigned to a domain and stays attached to it as long as the fee is paid).

1.5 I will also opt in to the discounted PositiveSSL for $2.74 (bargain) (fyi: Namecheap SSL types).

1.6 Check the Namecheap coupons page and apply this month's coupon for 10% off.

1.7 Confirm the order for $11.05 USD.

1.8 Congratulations you have just ordered a domain and SSL certificate.

More details: https://www.digitalocean.com/community/tutorials/how-to-point-to-digitalocean-nameservers-from-common-domain-registrars

2. Create a http://www.c9.io account

This will give you a nice UI to manage your unmanaged server.

2.1 Upgrade from the free account to the “Micro $9.00 / monthly” at https://c9.io/account/billing (this will allow you to use the c9.io IDE to connect to as many Ubuntu VM’s as you wish).

3. Buy the hosting (droplet) from digital ocean

3.1 Go to https://www.digitalocean.com and create an account and log in.

Note: If you are adding an additional server (droplet) to a Digital Ocean account and you want the droplets to talk to each other, make sure your existing servers have private networking set up.

3.2 Click Create Droplet

3.3 Enter a server name: e.g “yourdomainserver”

3.4 Select a server size (this can be upgraded later); Digital Ocean recommends a server with at least 30GB for a WordPress install.

3.5 Select an Image (you can stick with a plain ubuntu image) but it may save you time to install an image with the LAMP stack already on it.

LAMP stack is a popular open-source web platform commonly used to run dynamic websites and servers. It includes Linux, Apache, MySQL, and PHP/Python/Perl and is considered by many the platform of choice for development of high-performance web applications which require a solid and reliable foundation.  I will select LAMP.

3.6 Tick “private networking” if you think you may add more servers later (growing business)?

3.7 Paste in your SSH key from your c9.io account at https://c9.io/account/ssh (this is important, don’t skip this).

3.8 Click Create Droplet

3.9 Congratulations you have just created an Ubuntu VM in the cloud.

3.10 If you type your droplets IP into a web browser it should load your pages from your web server.

3.11 You can view your Ubuntu droplet details in the Digital Ocean portal.  You may need to reboot the server, make snapshots (backups) or reset passwords here.

3.12 You will need to change your droplet's root password that was emailed to you by Digital Ocean (if you never received one you can reset the root password in the digitalocean.com portal).  You can change your password by using the VNC window in the Digital Ocean portal (https://cloud.digitalocean.com/droplets/ -> Access -> Console Access). If you had no luck changing your password with the VNC method, you can use your Mac terminal and type: ssh [email protected] (where xx is your droplet's IP), then type yes, enter the password from the Digital Ocean email and change the password to a new/strong password (and write it down).

3.13 Now install the distro-stable nodejs (needed for the c9.io IDE) on the droplet by typing "sudo apt-get update" then "sudo apt-get install nodejs".

4. Now we can link the digital ocean ubuntu server to the http://www.c9.io IDE.

4.1 Login to your c9.io account.

4.2 Click Create a new workspace.

4.3 Enter a Workspace name and description.

4.4 Click Remote SSH Workspace

4.5 Enter "root" as the username.

4.6 Type in your new server's IP (obtained from viewing your droplet at Digital Ocean: https://cloud.digitalocean.com/droplets).

4.7 Set the initial path as: ./

4.8 Set the NodeJS path as: /usr/bin/nodejs

4.9 Ensure your SSH key is the same one you entered into the droplet.

4.10 Click Create Workspace.

Troubleshooting: If your workspace cannot log in, you may need to SSH back into your droplet (via the Digital Ocean VNC or an SSH terminal) and paste your c9.io SSH key into the ~/.ssh/authorized_keys file and save it. I used the command "sudo nano ~/.ssh/authorized_keys", pasted in my c9.io SSH key, then pressed CTRL+O, ENTER, then CTRL+X.

4.11 If all goes well you will see c9.io now has a workspace shortcut for you to launch your website.

4.12 You will be able to connect to your droplet from c9.io and edit or upload files (without the hassle of using SFTP and cPanel).

5. Now we will link the domain name to the IP-based droplet.

5.1 Login to your name cheap account.

5.2 Click “Account” then  “Domain List“, turn off domain parking and then click  “Manage”  (next to the new domain) then click “Advanced DNS”

5.3 Click “Edit” next to “Domain Nameserver Type” then choose “Custom“.

5.4 Add the following three name servers “ns1.digitalocean.com“, “ns2.digitalocean.com” and “ns3.digitalocean.com” and click “Save Changes“.

namecheapnameservers

5.5 Log in to https://cloud.digitalocean.com/domains, type your domain name (e.g. "yourdomain.com") into the domain box and select your droplet.

5.6 Configure the following DNS A record: "@" -> "XXX.XXX.XXX.XXX" (where XXX.XXX.XXX.XXX is your droplet's IP), and the CNAME records "www" -> "www.yourdomain.com." and "*" -> "www.yourdomain.com."

It can take 24-48 hours for DNS to replicate around the world, so I would suggest you go to bed at this stage. You can use https://www.whatsmydns.net/#A/yourdomain.com to check the DNS replication progress.

5.7 If you are impatient, check the DNS replication around the world using this link: https://www.whatsmydns.net

fyi: The full Namecheap DNS guide is here.

fyi: The Digital Ocean DNS guide is located here.

Setup a SSL Certificate

You can skip section 6 entirely and install a free SSL certificate if you wish (read this guide on using Let's Encrypt).

Follow the rest of this guide if you want to buy an SSL cert from Namecheap/Comodo (Let's Encrypt is easier).

6. Generate a CSR on your server for the Namecheap certificate.

6.1 Open your c9.io workspace to your domain

6.2 Click the Windows then New Terminal menu

6.3 Type: cd ~/.ssh/

6.4 Type the following to generate the CSR files (my server is "servername.com"; replace this with your server name):

# cd ~/.ssh
.ssh#

openssl req -newkey rsa:2048 -nodes -keyout servername.key -out servername.csr

Generating a 2048 bit RSA private key
.............................+++
............+++
writing new private key to 'servername.key'
-----
You are about to be asked to enter information that will be incorporated
into your certificate request.
What you are about to enter is what is called a Distinguished Name or a DN.
There are quite a few fields but you can leave some blank
For some fields there will be a default value,
If you enter '.', the field will be left blank.
-----
Country Name (2 letter code) [AU]:AU
State or Province Name (full name) [Some-State]:New South Wales
Locality Name (eg, city) []:Your City
Organization Name (eg, company) [Internet Widgits Pty Ltd]:Fearby.com
Organizational Unit Name (eg, section) []:Developer
Common Name (e.g. server FQDN or YOUR name) []:servername.com
Email Address []: [email protected]

Please enter the following 'extra' attributes
to be sent with your certificate request
A challenge password []:****************
string is too long, it needs to be less than  20 bytes long
A challenge password []:***************
An optional company name []:Your Name
~/.ssh# ls -al
total 20
drwx------ 2 root root 4096 Oct 17 10:20 .
drwx------ 7 root root 4096 Oct 17 10:17 ..
-rw------- 1 root root  399 Oct 17 08:06 authorized_keys
-rw-r--r-- 1 root root 1175 Oct 17 10:20 servername.csr
-rw-r--r-- 1 root root 1704 Oct 17 10:20 servername.key

6.5 Using the folder view in c9.io, browse to /root/.ssh/, open the text file "servername.csr" and copy the file contents.

6.6 In a separate window go to https://ap.www.namecheap.com/ProductList/SslCertificates, paste in the file contents you copied and click Submit.

6.7 Verify your details and click Next.

6.8 Next you will need to verify your domain by downloading and uploading a file to your server. Under "DCV Method" select "HTTP" and follow the prompts at Namecheap to download the file.

6.9 Complete the form (company contacts) and click Next.

6.10 Go to the Certificate Details page to download the validation file, or wait for the email with the zip file attached.

fyi: the support forums for this certificate are at https://support.comodo.com (but the site is rubbish; most pages load empty (e.g. this one)).

6.11 Under "DCV Methods in Use" click "Edit Methods" then "Download File".

6.12 Using the c9.io interface, upload the file to the /var/www/html folder (drag and drop).

6.13 Wait half an hour, then go back to your Namecheap dashboard and see if the certificate has been verified (it may take longer than that).

6.14 After a while a certificate will be issued. Under "See Details" click Download Certificate.

6.15 Upload the certificate files ("weatherpanorama_link.ca-bundle", "weatherpanorama_link.crt" and "servername.p7b") using the c9.io IDE to /root/.ssh/

6.16 Add "ServerName localhost" to "/etc/apache2/apache2.conf".

6.17 In a c9.io terminal run the command "sudo nano /etc/hosts" and add this line: "127.0.0.1 servername.com"

6.18 Run this command in a c9.io terminal: "sudo a2enmod ssl"

fyi: Comodo support forums: https://support.comodo.com/index.php?/Default/Knowledgebase/List/Index/1

fyi: Comodo apache certificate installation instructions: https://support.comodo.com/index.php?/Default/Knowledgebase/Article/View/637/37/certificate-installation-apache–mod_ssl

Don’t forget to cache content to optimise your Web server
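For Apache, caching static content can be as simple as mod_expires headers. A minimal sketch (the module must first be enabled with “sudo a2enmod expires”; the lifetimes are assumed values, tune to taste):

```apache
# Cache static assets in the browser (example lifetimes)
<IfModule mod_expires.c>
    ExpiresActive On
    ExpiresByType image/png  "access plus 1 month"
    ExpiresByType image/jpeg "access plus 1 month"
    ExpiresByType text/css   "access plus 1 week"
    ExpiresByType application/javascript "access plus 1 week"
</IfModule>
```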

Security

Having a server introduces risks. Do check your website often in https://www.shodan.io and see if it has open software or is known to hackers.
todo: SSL https://www.namecheap.com/support/knowledgebase/article.aspx/794/67/how-to-activate-ssl-certificate

Easily deploy an SSD cloud server on @DigitalOcean in 55 seconds. Sign up using my link and receive $10 in credit: https://www.digitalocean.com


Seriously, Let’s Encrypt allows you to add an SSL cert in minutes (over Comodo SSL certificates).

Donate and make this blog better


Ask a question or recommend an article
[contact-form-7 id=”30″ title=”Ask a Question”]

v1.7 added some more.

Filed Under: Cloud, Domain, Hosting, Linux, MySQL, Security, ssl, VM Tagged With: digital ocean, domain, namecheap, ssl

How to transfer an existing website to jumba.com.au (uber.com.au)

November 29, 2015 by Simon Fearby

Note: Transferring domains can be a bit like the old west, not everything goes to plan and sometimes the domain registration is transferred to jumba/uber and other times not. Worst case get on the Live Chat to Jumba/Uber and sort it out.

P.S. I don’t recommend Uber anymore; I prefer managing my own domain and email on my own server (and I moved my WordPress). Managing my own domain is cheaper and faster than an Uber domain.

1. Before you begin you will need your domain’s “EPP code” (site transfer password) from your existing host or ISP, and you will need a full backup of your website (unless you are starting from scratch).

2. Go to http://www.jumba.com.au and select Transfer, then enter your domain and click Check.

transfer_domain_01

3. Enter your domain “EPP Code” (Authorization Key) and click Transfer Domain.

transfer_domain_02

4. Click Continue.

transfer_domain_03

5. Click “cPanel Hosting“.

transfer_domain_04

6. Choose Subscription Period and Plan and click Continue.

transfer_domain_05

7. Complete the Contact and Payment details and submit.

transfer_domain_06

8. After a few hours you will receive a confirmation email from Jumba/Uber along with login details to https://my.uber.com.au/. I recommend you log in to the my.uber portal and click Pay Invoice under Billing, as sometimes the first payment is not automatic and your new domain won’t be transferred until it is paid.

9. You will receive a domain transfer confirmation email from openconnect.com.au; you will need to click the link to initiate the domain transfer.

transfer_domain_08

10. You may receive a warning from your existing domain registrar. In my case below I had to ignore this email.

transfer_domain_09

It may take 5 days for your old host to allow the domain to be transferred away.

11. At this stage you may notice your old website is still up at the old host. You may think the domain will transfer by itself after 5 days (wrong). You will need to get in touch with UBER after 5 days (live chat is easiest) and ask for the “ENOM DNS record to be updated from your old host to UBER”. Once Uber say they have done this, the new DNS records will replicate around the internet in 4-48 hours. (If you are keen you can log in to your old domain registrar and update the name servers to point at the new host’s DNS servers before you transfer.)

  • ns1.jumba.net.au 203.88.119.146
  • ns2.jumba.net.au 113.20.10.154

You can use this site ( https://www.whatsmydns.net/#A/fearby.com ) to see if your website location is updated around the world.

How to update DNS records:

  • Crazy Domains: How to Update a Name Servers IP Address

  • Name Cheap: How can I change the name servers for my domain?
  • Go Daddy: Change Name servers for Your Domain Names.

12. Once your domain is transferred you have the fun of setting up an FTP account, email and new website.

13. You can access your domain’s cPanel via www.yourdomain.com/cpanel (you will have a link to your cPanel from your my.uber.com.au portal).

CPanel Summary

14. Once your new domain is set up you can cancel any payments for the old domain/hosting.

Back to Fearby.com.


Filed Under: Domain, Hosting Tagged With: domain, jumba, transfer, uber

Adding a commercial SSL certificate to a Digital Ocean VM

June 21, 2015 by Simon Fearby

fyi: Consider reading this first (newer blog post):  How to buy a new domain and SSL cert from NameCheap, a Server from Digital Ocean and configure it.

If you have read my quickest way to setup a scalable development ide and web server guide, chances are you set up a www.c9.io development IDE connected to a Digital Ocean Ubuntu VM in the cloud for about $5 a month.  It did not take me long to install an NGINX web server, PHP, MySQL and phpMyAdmin. The next logical step is to secure my site with an SSL certificate.

I have purchased a commercial SSL certificate in the past for a CPanel sub domain and they cost about $150 a year.  I always thought the certificate was set in stone, and if it was a weak certificate it would perform poorly in the essential https://www.ssllabs.com/ssltest/index.html certificate tester.

I ran a quick test over my previously purchased managed-host-provided certificate (let’s just say it performs poorly).

Managed WebServices SSL
Managed WebServices SSL Report

Generating a $0 self signed SSL Certificate (Digital Ocean VM)

Digital Ocean have fantastic guides and I searched Google for “digital ocean how to create an ssl certificate” and read this guide. Within a few minutes I had generated a self signed certificate, added it to my NGINX config and had SSL enabled on my site.  The only problem: the certificate said it was not trusted by a third party (this may be OK for a closed development box but it would not be good in a production environment).

Self Signed Certificates are not trusted

Generating a $9 commercial SSL Certificate (Digital Ocean VM)

I googled and found this Digital Ocean guide How To Install an SSL Certificate from a Commercial Certificate Authority.

Without listing each step I performed, I was able to generate a “key” and “csr” file (from the Digital Ocean guide; I ignored Namecheap’s guide). These files are needed to seed the commercial SSL certificate.

I decided to buy a domain certificate from RapidSSL via Namecheap (as they responded to a Livechat support request where GoDaddy ignored the live chat). A Namecheap certificate for my subdomain was going to cost me $9 US a year (that is mega cheap compared to the $150 a CPanel host was going to charge me).  Maybe the $9 certificate will be crap?

I followed the Digital Ocean guide and to my surprise I had a valid certificate emailed to me within 15 minutes once I followed the process to purchase, verify and activate the certificate. To Namecheap’s credit, the live chat person (“Anastasia B”) stuck with me and answered the frequent questions I had (I thought $9 was too good to be true).

Once I had ordered the commercial certificate I was able to generate the private key and CSR that feed into it with this command (replace “thesubdomain” with your subdomain and “thedomain” with your domain; if you are not applying the certificate to a subdomain then exclude the subdomain).

>cd /etc/nginx/ssl/

> openssl req -newkey rsa:2048 -nodes -keyout thesubdomain_thedomain_com.key -out thesubdomain_thedomain_com.csr
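Before pasting the CSR into the Namecheap pages it is worth checking that the key and CSR actually match. A sketch of the same command in non-interactive form (the -subj values are placeholders, not from the guide), followed by the classic modulus comparison:

```shell
# Generate the key + CSR non-interactively (placeholder subject values)
openssl req -newkey rsa:2048 -nodes \
  -keyout thesubdomain_thedomain_com.key \
  -out thesubdomain_thedomain_com.csr \
  -subj "/C=AU/ST=NSW/O=Example/CN=thesubdomain.thedomain.com" 2>/dev/null

# Sanity check: the key and CSR must share the same RSA modulus
openssl rsa -noout -modulus -in thesubdomain_thedomain_com.key | openssl md5
openssl req -noout -modulus -in thesubdomain_thedomain_com.csr | openssl md5
# The two hashes should be identical
```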

The contents of the locally generated CSR were then pasted into the Namecheap SSL pages based on the Digital Ocean guide. At the end of the Namecheap purchase and verification process I was emailed 4 files that make up the certificate. The Digital Ocean and Namecheap guides were a bit short on combining the certificates, but this was the working command to merge the bits into one valid certificate.

> cd /etc/nginx/ssl/

>cat thesubdomain_thedomain_com.crt COMODORSADomainValidationSecureServerCA.crt COMODORSAAddTrustCA.crt AddTrustExternalCARoot.crt >> cert_chain.crt
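The order of that cat matters: your own certificate first, then each intermediate, then the root. A tiny illustration with placeholder files standing in for the real PEM files (the real filenames come from the Namecheap email):

```shell
# Placeholders standing in for the real certificate files
echo "site cert"    > site.crt
echo "intermediate" > inter.crt
echo "root"         > root.crt

# Server certificate first, then the chain up to the root
cat site.crt inter.crt root.crt > cert_chain.crt
head -1 cert_chain.crt   # prints: site cert
```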

Then all I had to do was configure NGINX to use the certificate.

> listen 443 ssl;
> server_name thesubdomain.thedomain.com;
> ssl_certificate /etc/nginx/ssl/cert_chain.crt;
> ssl_certificate_key /etc/nginx/ssl/thesubdomain_thedomain_com.key;

SSL Enabled

A quick restart of the NGINX server and the certificate was good to go; I now had a trusted SSL certificate enabled on my site.

I ran an SSL Labs test over the site and got a lame C ranking.  WTF, I thought SSL was supposed to make sites secure. Maybe there is more I can do to make this secure.

SSL Test After Install

Research and Lockdown Mode

I googled as much as I could find on NGINX and SSL security.

Essential reading:

  • https://raymii.org/s/tutorials/Strong_SSL_Security_On_nginx.html
  • https://cipherli.st
  • https://gist.github.com/plentz/6737338
  • https://www.mare-system.de/guide-to-nginx-ssl-spdy-hsts/
  • https://weakdh.org
  • https://www.owasp.org/index.php/List_of_useful_HTTP_headers

To me the biggest failing point in the SSL Labs test was a weak prime in the Diffie-Hellman crypto.  I thought I could just disable these crypto algorithms, but this was not the case.  The secret is to generate a new 2048 bit key on my Digital Ocean server for SSL to use in connections with browsers instead of the known 1024 bit key.  This was as simple as running this command (and waiting 10 mins):

>cd /etc/nginx/ssl/

> openssl dhparam -out dhparams.pem 2048
>Generating DH parameters, 2048 bit long safe prime, generator 2
>This is going to take a long time

Then when the key is generated you can add it to your NGINX config

>  ssl_dhparam /etc/nginx/path/dhparams.pem;

So after much trial and error, this is the bulk of my NGINX configuration:

listen 443 ssl;

# Change to your server
server_name thesubdomain.thedomain.com;
# Location of the private key and merged certificates
ssl_certificate /etc/nginx/ssl/cert_chain.crt;
ssl_certificate_key /etc/nginx/ssl/thesubdomain_thedomain_com.key;

# Here is the larger cipher list we are not using (kept for reference)
# ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';

# Only use a small set of ciphers (may not work on older devices or browsers, but screw them)
ssl_ciphers "EECDH+AESGCM:EDH+AESGCM:AES256+EECDH:AES256+EDH";

# Force only allowing the ciphers above
ssl_prefer_server_ciphers on;

#use the 2048bit DH key
ssl_dhparam /etc/nginx/ssl/dhparams.pem;

# Don't allow old encryption methods like SSL3
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;

# Set SSL caching and storage/timeout values: 
# More info: http://nginx.com/blog/improve-seo-https-nginx/
ssl_session_cache shared:SSL:40m;
ssl_session_timeout 4h;
# Prevent Clickjacking
add_header X-Frame-Options DENY;

# Prevent MIME Sniffing
add_header X-Content-Type-Options nosniff;

# Disable session tickets
ssl_session_tickets off; # Requires nginx >= 1.5.9

# OCSP (Online Certificate Status Protocol) is a protocol for checking if a SSL certificate has been revoked
ssl_stapling on; # Requires nginx >= 1.3.7
ssl_stapling_verify on; # Requires nginx => 1.3.7

# Use Google DNS
resolver 8.8.8.8 8.8.4.4 valid=300s;
resolver_timeout 5s;

# force https over http
add_header Strict-Transport-Security "max-age=63072000; includeSubdomains; preload";

# No need to manually redirect all traffic to https as the header above does this
#rewrite ^/(.*) https://thesubdomain.thedomain.com/$1 permanent;
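That Strict-Transport-Security max-age above is just 2 years expressed in seconds:

```shell
# HSTS max-age: 2 years in seconds
echo $((2 * 365 * 24 * 60 * 60))   # prints 63072000
```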

Conclusion

This is my result on the SSL Labs test now. Not bad for $9 and a few hours researching.

Final SSL Labs Score

A big thank you goes to “Anastasia B” on the Namecheap Livechat; they stuck with me while I jumped ahead and ignored the guides.

If you need an SSL certificate choose https://www.namecheap.com/ and don’t forget http://www.digitalocean.com for full access VMs.

Also listen to this podcast if you want to understand how HTTPS and the internet work.

Also check out how to update your Open SSL and security: https://fearby.com/article/update-openssl-on-a-digital-ocean-vm/

Security

Having SSL may not be enough; do check your website often in https://www.shodan.io and see if it has open software or is known to hackers.


Filed Under: Cloud, Development, Domain, Hosting, Linux, Scalable, Security, ssl, VM Tagged With: encryption, ssl, ssl certificate

The quickest way to setup a scalable development ide and web server

June 8, 2015 by Simon Fearby

fyi: Consider reading this first (newer blog post):  How to buy a new domain and SSL cert from NameCheap, a Server from Digital Ocean and configure it.

Buying a Domain

Buy a domain name from Namecheap here.

Domain names for just 88 cents!

Why do I need a free Development IDE/VM

  • You already have heaps of sub domains/sites/blogs on one CPanel domain and you don’t want to slow down your server anymore.
  • You need a new collaboration web server setup in minutes.
  • You want a server where you have full control to install the latest widgets (NGINX, NodeJS etc).
  • You want a single interface where you can deploy, develop and test online.
  • You want to save money
  • You want to access and edit your sites from anywhere.

The Solution

Cloud9 ( http://www.c9.io ) combines a powerful online code editor with a full Ubuntu server in the cloud. Simply pick your configuration, develop an app, invite others in to preview and help code.

Update 2018: For the best performing VM host (UpCloud) read my guide on the awesome UpCloud VM hosts (get $25 free credit by signing up here).

Now there is no need to spend valuable development time setting up a hardware/software platform. You can create, build and run almost any development stack in minutes. Cloud9 maintain the server and you have full control of it.

Signing up for a C9 account.

Cloud 9 offer a number of hosting plans (one free) with a range of hardware resources for when you want to scale up.  The free tier is great if you want to keep your development environment closed.  Use this link and get $19 free credit https://c9.io/c/DLtakOtNcba

c92016

Before you connect to your Digital Ocean VM, connect to the server via the console in the Digital Ocean admin panel (you may need to reset your root password) and then install NodeJS (required by the c9.io IDE).

Installing NodeJS

  • curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
  • sudo apt-get install -y nodejs
  • node -v

Now you will have node v6.3.0

Create a development Workspace.

Once you create a Cloud 9 account you can create a VM workspace. You can choose some common software packages to be installed by default.  Don’t worry, you can install anything you want later from the command line in the VM.

c92016b

How simple is that, a new development environment in minutes.

Development Workspace

You can edit new code, play with WordPress or NodeJS all from the one Cloud9 IDE. The Cloud 9 IDE allows you to open a “bash terminal” tab, folder list, web browser, code window and debug tools (all from the web).

Code on the left, WordPress on the right, terminal on the bottom 🙂

Edit and View Code Workspace

C9 IDE

You can Install what you want

Because you have access to the Linux bash terminal you can, for example, type the following to install an NGINX web server.

  1. sudo apt-get update
  2. sudo apt-get install nginx
  3. sudo service nginx start

Full Bash Terminal


As usual, installing stuff in Linux requires loads of googling and editing config files, so beware.

What are the downsides of a c9.io Ubuntu server?

Your development environment (public or private) is mostly off limits to the outside world unless you invite people in who have a Cloud 9 account.  This is great if you want to develop a customer’s website off the grid and keep it secure, or share the development with other developers.  Cloud 9 don’t really have a “go to production” plan, so you will need to find a host to deploy to when you are ready.

Luckily this is where http://www.digitalocean.com comes in. Digital Ocean allow you to create a real/public VM (just like Cloud 9) and best of all you can connect it to the Cloud 9 IDE.

The only downside is you will need to move on from the free Cloud 9 account and pay $9 a month to allow you to connect securely (via SSH) to your new (real) Digital Ocean VM.  On the up side, the $19 a month plan gives you twice the RAM (1GB), 10x the storage (10GB) and you can have 2 premium (private) accounts.

Signing up for a Digital Ocean Account

The cheapest Digital Ocean Hosting plan is $5 a month. If you want $10 free credit at Digital Ocean (two months free) please use this link: https://www.digitalocean.com/?refcode=99a5082b6de5

Tip:

Granting SSH Access (before you create a server (droplet))

Tip: Add your Cloud 9 SSH key to your Digital Ocean account before creating a droplet (VM). I added my SSH key when the VM/Droplet was created and I could not connect to it from Cloud 9. I then deleted the droplet, added the SSH key to my Digital Ocean account here, then created the Droplet (VM) and all was OK.  You can find your SSH key on the front page of your Cloud 9 desktop.

do2016b

This is the magic option; if you skip this you will be emailed a password to your VM and you will be on your own connecting to it with a secure terminal window. If you add your Cloud 9 SSH key (found in your Cloud 9 IDE: https://cloud.digitalocean.com/settings/security) you can connect to and control your new Digital Ocean VM from the Cloud 9 UI.

Now you can create a server (droplet)

do2016

A Digital Ocean server can be set up in minutes. If you only use it for 2 weeks you will only be charged for 2 weeks. If you use my link your first 2 months are free (if you select a $5 server).

Your server should be created in well under 5 minutes. Write down your VM’s IP.

Digital Ocean Droplet (VM) Created

Connecting your C9 account to Digital Ocean Droplet

Now go back to Cloud 9 and login. Go here ( https://c9.io/account/ssh ) and add your SSH key from Digital Ocean.

Cloud 9 guide on setting up SSH on your server: https://docs.c9.io/docs/running-your-own-ssh-workspace


fyi: Here is a more recent post of how to connect Cloud 9 with AWS.

Create a new workspace with these settings (but use your IP from digital ocean) to connect to your new Digital Ocean VM.

c92016c

Now you can develop like a pro. Cloud 9 will allow you to login to your development environment from anywhere and resume where you left off.

Traps and Tips

  • Consider buying this course: https://www.udemy.com/all-about-nodejs/?dtcode=9TQkocT33Eck 
  • Get your VM/Droplets right (if they don’t work as expected delete them and start again).
  • Know how to safely shutdown a Linux VM.
  • Google.
  • If you receive the error “Could not execute node.js on [email protected] bash: /usr/bin/nodejs:” in C9 when connecting to the server, try installing node via the Digital Ocean manual console window.
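For that last error, one common cause (my assumption, not from the original guides) is that node is installed but the tool looks for a binary named /usr/bin/nodejs (or vice versa); a symlink usually sorts it out. Illustrated with a scratch directory so it can run anywhere:

```shell
# On a real server you would run:
#   sudo ln -s "$(command -v node)" /usr/bin/nodejs
# Demonstration in a scratch directory:
mkdir -p scratch
printf '#!/bin/sh\n' > scratch/node   # stand-in for the node binary
chmod +x scratch/node
ln -sf "$PWD/scratch/node" scratch/nodejs
ls -l scratch/nodejs                  # shows the symlink pointing at node
```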

Connecting your new Cloud IP to a CPanel sub domain

If you have a CPanel domain elsewhere you can link your new Digital Ocean Cloud VM IP to a new sub domain.

  1. Login to your CPanel domain UI.
  2. Click Simple DNS Zone Editor
  3. Type the sub domain name (swap my domain.com to your domain).
  4. Enter the IP for your Digital Ocean domain (you get this from the Digital Ocean account page).
  5. Click Add a record.

    DNS Zone
  6. Now when someone types http://newcloud.mydomain.com the A record points them at your new cloud VM, but the URL stays the same (how professional is that).
  7. You can add multiple A name records pointing to the same IP.
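In zone-file terms, the A record created above looks like this (the name, TTL and IP are examples only, your Digital Ocean IP goes in the last column):

```
newcloud.mydomain.com.    14400    IN    A    203.0.113.10
```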

Summary

$19 a month gives me a kick arse www.c9.io development environment and a few VMs.

$5 a month gives me my own real VM that I can scale up.

Coupon

You can easily deploy an SSD cloud server in 55 seconds for $5 a month. Sign up using my link and receive $10 in credit: https://www.digitalocean.com/?refcode=99a5082b6de5

Security

After a few weeks, do check your website with https://www.shodan.io and see if it has open software or is known to hackers.

V1.6 security

Filed Under: Cloud, Development, Domain, Hosting, Linux, Scalable, Security, VM Tagged With: cloud, cloud 9, code, development, digital ocean, ide, vm


Disclaimer

Terms And Conditions Of Use All content provided on this "www.fearby.com" blog is for informational purposes only. Views are his own and not his employers. The owner of this blog makes no representations as to the accuracy or completeness of any information on this site or found by following any link on this site. Never make changes to a live site without backing it up first.

Copyright © 2022 · News Pro on Genesis Framework · WordPress · Log in
