Archive for the ‘development’ Category:

Apache virtual hosts, HTTPS, and JIRA Docker Containers

September 8th, 2017


The goal was to easily create and recreate docker instances protected via SSL and accessed by simple URLs. Below I explain how to map subdomains in apache (used as firewall, reverse proxy, content server) to a specified IP:port of the Jira and Bitbucket (git) docker containers.


Docker is awesome; you can easily create a new web service in minimal time.  However, in my case, I want everything to be routed through one machine's https (port 443).  Additionally I wanted to set up Jira and Bitbucket (and possibly more). Previously I had no convenient way to view my private repositories from a web browser.  I show how to do this with apache 2.4 and docker on a single Ubuntu Linux machine.

  • Security – a single apache instance serves as reverse proxy and can force all HTTP requests to use HTTPS.
  • HTTPS certificates – use certbot by Let's Encrypt to install free certificates so HTTPS works.
  • DNS and virtual hosts – assuming multiple domains or subdomains all get routed to the same apache instance, configure apache conf files to map these requests to the correct port and path on each docker container.
  • Creating / starting the JIRA and Bitbucket (git) docker containers so each listens on a specific port.

Note: example.com is used as a placeholder throughout this doc; replace it everywhere with your own domain name.




DNS Setup

The goal is for several domains (example.com, www.example.com, jira.example.com, and git.example.com) to resolve to the machine where apache will run.  There are many ways to do this; I have one A record mapping example.com to an IP, and CNAME records mapping the subdomains to example.com.
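In zone-file terms, the records might look something like this (the IP and the exact record names are placeholders; your DNS provider's UI may present these differently):

```
; one A record for the apex, CNAMEs aliasing the subdomains to it
example.com.        A      203.0.113.10
www.example.com.    CNAME  example.com.
jira.example.com.   CNAME  example.com.
git.example.com.    CNAME  example.com.
```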

Apache Setup

I use Apache as the reverse proxy because of its popularity and my experience with it.  If performance is an issue, nginx is probably better.  Below you’ll find the important bits from these apache conf files:

  • apache2.conf
  • sites-enabled/example.com.conf
  • sites-enabled/jira.example.com.conf
  • sites-enabled/git.example.com.conf
  • sites-enabled/000-default.conf
  • conf-available/ssl.example.com.conf
  • conf-available/vhost.logging.conf
> grep sites-enabled apache2.conf
IncludeOptional sites-enabled/*.conf
> cat /etc/apache2/sites-enabled/example.com.conf
<IfModule mod_ssl.c>
<VirtualHost *:443>
 ServerName example.com

 DocumentRoot /var/www/example.com

 Include conf-available/vhost.logging.conf
 Include conf-available/ssl.example.com.conf
</VirtualHost>
</IfModule>
> cat /etc/apache2/sites-enabled/jira.example.com.conf
<IfModule mod_ssl.c>
<VirtualHost *:443>
 ServerName jira.example.com

 <Proxy *>
   Order allow,deny
   Allow from all
 </Proxy>
 ProxyRequests Off
 ProxyPreserveHost On
 # below must map to docker ip:port setup
 ProxyPass / http://localhost:8080/
 ProxyPassReverse / http://localhost:8080/

 Include conf-available/vhost.logging.conf
 Include conf-available/ssl.example.com.conf
</VirtualHost>
</IfModule>
> cat /etc/apache2/sites-enabled/git.example.com.conf
<IfModule mod_ssl.c>
<VirtualHost *:443>
 ServerName git.example.com

 <Proxy *>
   Order allow,deny
   Allow from all
 </Proxy>
 ProxyRequests Off
 ProxyPreserveHost On
 # below must map to docker ip:port setup
 ProxyPass / http://localhost:7990/
 ProxyPassReverse / http://localhost:7990/

 Include conf-available/vhost.logging.conf
 Include conf-available/ssl.example.com.conf
</VirtualHost>
</IfModule>
> cat /etc/apache2/sites-enabled/000-default.conf
<VirtualHost *:80>

 DocumentRoot /var/www/html
 Include conf-available/vhost.logging.conf

 # Redirect http (port 80) to https (port 443)
 RewriteEngine on
 RewriteCond "%{SERVER_NAME}" ".*\.example\.com$" [OR]
 RewriteCond %{SERVER_NAME} =example.com
 RewriteRule ^ https://%{SERVER_NAME}%{REQUEST_URI} [END,QSA,R=permanent]
</VirtualHost>

> cat /etc/apache2/conf-available/ssl.example.com.conf
# added by certbot-auto
SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
> cat /etc/apache2/conf-available/vhost.logging.conf
LogFormat "%{Host}i:%p %h %l %u [%{%d/%b/%Y %T}t.%{msec_frac}t %{%z}t] %{us}T \"%r\" %>s %b \"%{Referer}i\" \"%{User-Agent}i\"" vhost_combined2
CustomLog ${APACHE_LOG_DIR}/access.vhosts.log vhost_combined2
ErrorLog ${APACHE_LOG_DIR}/error.log


HTTPS Certificates

With Let's Encrypt, you can get free certificates from a Certificate Authority (CA). I used the cmd-line certbot to install and renew them. Initial setup can take up to 30 minutes, but every 3 months when you renew it should only take a few minutes. I do this on Ubuntu linux, but they have instructions for all the popular flavors of linux.  When done, you can verify your installed certificate using an online SSL checker.

One thing to note here is that it's easiest to have a single certificate cover the domain and subdomains. In January 2018 wildcard certs will be supported, but till then you'll need to start with something like this:

cd ssl-certs && ./certbot-auto --apache \
 -d example.com \
 -d jira.example.com -d git.example.com

Docker Setup

Docker is the new cool kid on the block, and as such, it is constantly improving.  So what I write here may not be exactly what you need to do.  In any case, what I did is setup 3 docker containers – one for Jira, one for Bitbucket, and one for Postgres database.  If you don’t have experience setting up Jira or Bitbucket, it can be tricky, but Atlassian has pretty good documentation.

I have created a sample docker-compose.yml that covers what's needed on the docker side.  As previously mentioned, you will need to replace example.com with your domain name.
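A minimal sketch of what such a docker-compose.yml might contain follows. The image names, ports, and volume paths are assumptions based on the official Atlassian and Postgres images, not the exact file I used; the published ports must match the ProxyPass targets in the apache vhosts above:

```yaml
version: '2'
services:
  jira:
    image: atlassian/jira-software      # assumed image name
    ports:
      - "8080:8080"                     # matches ProxyPass in jira vhost
    volumes:
      - ./jira-data:/var/atlassian/jira # assumed host path
  bitbucket:
    image: atlassian/bitbucket-server   # assumed image name
    ports:
      - "7990:7990"                     # matches ProxyPass in git vhost
    volumes:
      - ./bitbucket-data:/var/atlassian/application-data/bitbucket
  postgres:
    image: postgres:9.6
    environment:
      POSTGRES_PASSWORD: changeme       # placeholder, use a real secret
```

Jira and Bitbucket are then configured (through their own setup wizards) to use the postgres container as their database.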

Heroku, Node.js, Performance and profile_time

I’ve been using Heroku for about 10 months now, mostly with node.js. Recently one of our apps was using more web dynos than we thought it needed, so I looked into analyzing performance. I ended up writing my first node packaged module (npm) to solve my problem.

We have one Heroku app that is our main http server, using node.js, express, jade, less, and connect-assets to serve primarily html pages. This talks to a second Heroku app that we call our API server. The API server is also a node.js http server, using mongo as its database, serving json in a standard RESTful way. The API server is fast – when the html server is under enough load to need 10 web dynos, the api server can easily keep up with only 1 or 2 web dynos. My gut was asking me: are 10 web dynos necessary? And even with 10 web dynos, there were times when requests would timeout or other errors would trigger. Maybe some error or timeout had a huge ripple effect, slowing things down.

So the problem is not just to figure out where time is spent on any one particular request, but where time is spent on average across all requests and all web dynos for the html server. How much time is spent communicating with the api server? Doing some internal processing? Or something else in the heroku black box?

The first step was educating myself on existing tools that could help me. We already use New Relic, which is awesome, and I highly recommend it to anyone who uses Heroku. At the time New Relic support for node.js was still in beta (still is as of this writing), and one of the features supported in other languages (like ruby) is the ability to use a New Relic api to track custom metrics. I thought this would be a great way to track down how much time, on average, is spent in various sections of our code.  Too bad it doesn't work with node.js.

I considered other tools (like these) but the only one worth mentioning was nodetime. For us, nodetime was somewhat useful in that it offered details at levels outside of the application, such as stats on cpu, memory, OS, and http routing. This did not appear to solve the problem (I admit I didn't read all their docs), but it did provide some insight and some validation that things are set up as they should be, based on documentation from Heroku and Amazon (Heroku runs on Amazon EC2).

However, nothing gave me what I needed – a high level way to see where time is spent. So I built profile_time (code on github). Here's a description from the docs:

A simple javascript profiler designed to work with node and express, especially helpful with cloud solutions like heroku.

In node + express environment, it simply tracks the start and end times of an express request, allowing additional triggers to be added in between. All triggers, start, and end times are logged when response is sent.

Where this shines is in the processing of these logs. By reading all logs generated over many hours or days (and from multiple sources), times can be summed up and total time spent between triggers can be known. This is useful when trying to ballpark where most of node app time is spent.

For example, if 2 triggers were added, one before and one after a chunk of code, the total time spent in that chunk of code can be known. And more importantly, that time as percent of the total time is known, so it is possible to know how much time is actually being spent by a chunk of code before it is optimized.
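The trigger idea can be sketched in plain javascript. This is a minimal illustration of the approach, not the actual profile_time API – the function and field names here are made up:

```javascript
// Sketch of the profile_time idea: record a start time, named triggers in
// between, and an end time, then report each gap as a share of total time.
function createProfiler() {
  const marks = [['start', Date.now()]];
  return {
    // call trigger() before/after interesting chunks of code
    trigger(name) { marks.push([name, Date.now()]); },
    // call end() when the response is sent; returns one entry per gap
    end() {
      marks.push(['end', Date.now()]);
      const total = marks[marks.length - 1][1] - marks[0][1] || 1;
      const gaps = [];
      for (let i = 1; i < marks.length; i++) {
        const ms = marks[i][1] - marks[i - 1][1];
        gaps.push({ upTo: marks[i][0], ms, pct: (100 * ms) / total });
      }
      return gaps; // log these; sum them across requests/dynos offline
    }
  };
}
```

Summing the logged gaps from many requests (and many dynos) offline is what reveals which chunk of code accounts for what percent of total time.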

In conclusion, jade rendering was the culprit. More specifically, compiling less to css in jade on each request was really time consuming (around 200ms per request, which is HUGE). To summarize the jade issue, this line, which recompiled the less on every request:

!= css('bureau')

was replaced with a plain link to the precompiled stylesheet:

link(rel='stylesheet', href='.../bureau.css', type='text/css' )

Overall I am impressed with Heroku and am quite pleased by how easy it is to create, deploy, and monitor apps. Most of my apps have been in node, but I have run php, ruby, and python apps as well. Heroku is not perfect, and I would not recommend it for serious enterprise solutions (details on that in another post). It's great for startups or small businesses where getting stuff up and running fast and cheap is key.

Mobile: Native or Web App?

Before trying to build a mobile app, this is the first question you should ask yourself.  And by native, I mean an app that runs on Android, iPhone, iPad, Windows Mobile, or Blackberry. And by web app, I mean something that runs in a mobile browser.

Short answer:  If you've got deep pockets and lots of developers (like Facebook), and you want features HTML5 cannot provide, go native.  But really it depends on what you're trying to do and what resources you have.

The right answer only happens after goals have been identified, both short term and long term. This blog post will not cover all the details needed to answer the question; instead it will provide a few links that cover the details.  Note there is also a third option of building a hybrid app (a native app that gets the latest content from the web).


Trend towards HTML5 (aka web apps)

Still want native

Option 3: Hybrid

HTML5 Facebook Announcement, Sept 2012

For techies: The details

I hope the links above helped. Remember not to confuse what's best for your app with what other apps do or how they do it.  If you're still not sure, one approach is to design a web app first, and if it doesn't meet your needs (which should be fleshed out during the design phase), then go native.


Fluent Conference Wins

I just completed one of the best tech conferences I've ever been to – the Fluent javascript conference in SF. O'Reilly did a great job of providing many opportunities to learn more about various facets of the javascript world. These include business, mobile, gaming, and tech stacks, detailed in the very useful fluent schedule. There was also tons of buzz around web apps (code shared on client and server), backbone.js, and node.js, among other things. It was well organized, with usually about 5 parallel sessions, and enough breaks to consolidate notes, meet other attendees, explore the exhibit hall, or just catch up with email. There were also a few organic meetups at night, but I did not make it to any of those.

I was happy to see discussion around the business side of javascript, mainly due to the rise of web apps and HTML5. Even though javascript has been around for 17 years, only in the last few years has there been an explosion of js frameworks and libraries. This is partially attributed to the mobile explosion, apple not supporting flash, and a really great dev community (yay github). With all these new tools available, companies can focus more on the bits they care about, allowing them to get new apps, features, and fixes in front of their users faster than ever.

Web apps were a very popular discussion area, from both the business and development side. Two sessions in particular highlighted this. First was how businesses are "Investing in Javascript" (pdf presentation by Keith Fahlgren from Safari Books). The other was by Andrew Betts, discussing the Financial Times' web app which allows users to view content offline. Most people know that traditional newspapers are dying, but I liked how Andrew points out that "newspaper companies make money selling *content*, not paper". Also Ben Galbraith and Dion Almaer from Walmart had a fun-to-watch Web vs Apps presentation (yes, it's true, tech isn't always DRY). The main takeaway from them (which was echoed throughout the conference) was that web apps are better than native apps in most ways except one: native can sometimes provide a better user experience (but not always). Of course you may still want to build a native app using html5 and javascript, and there are 2 great ways that people do this, using Appcelerator's Titanium or PhoneGap (now Cordova, the apache open-source version). One of the coolest web apps I saw at the conference was from clipboard.com. Watch Gary Flake's presentation (better look out, pinterest).

For the uber techies out there, there were lots of insights on how companies successfully used various js libraries and frameworks (in other words, what's your technology stack). This is important to pay attention to, since not all the latest and greatest code is worthy of production environments. You should think about stability, growth, documentation, and community involvement. Here are a few bits I found interesting:

  • Trello (which supports IE9+ only): Coffeescript, LESS, mustache templates, jquery/underscore/backbone
  • jquery, less, node.js
  • new soundcloud: infrastructure: node.js, uglify, requireJS, almondJS .. served using nginx. Runtime: backbone, Handlebars
  • twitter: less, jquery, node.js, more twitter tech stack
  • Riak, Redis, NGINX, jQuery, Node.js, node.js modules
  • pubnub: custom c process faster than memcached and redis
  • picplum tech stack: coffeescript, backbone.js, rails 3.2.3, unicorn + resque, heroku postgres, heroku (nginx), AWS CloudFront & S3
  • stackmob: uses backbone, mongoDB, Joyent and EC2, Scala and Lift, Netty and Jetty

Finally, here are a few other cool tech-related tidbits from the conference. There was so much good stuff; this is not a complete list, just a few highlights from my notes.

Summer Festivals on a Map

April 10th, 2011


Summer is finally here in Chicago. I woke up this morning and it was already 71 degrees out, giving me (and everyone else) a thirst for summer.  And one of the best things about summer in Chicago is all the street festivals.  In the past I added my favorite ones to my calendar.  This year I decided to go a bit further and I created a Google calendar called “Chad’s Chicago: Summer Festivals and More”.

The calendar includes every major summer festival in the Chicago area.  And as I say on the calendar subtitle, this is “Events Chad would do if he had the time: Summer Festivals, Burningman art and music, beer, outdoors, gardening, etc.”  Currently there are about 100 events listed for this summer – and I’m adding more every day. Great for people who want to explore different neighborhoods on different weekends, or people who want to hit more than one event in the same neighborhood on the same day, etc.

If you just want to look at the events, use the Chad's Chicago – web browsing link. If you use google calendar already, you can subscribe to the calendar using this Chad's Chicago – iCalendar link.  If you've never done that or forgot how, read the google help on subscribing to calendars.

The main reason I made this calendar was to see all the Festivals on a map.  We all know Chicagoland is a big place, and sometimes you just want to know about events in your neighborhood.  Well, now you can.  My GCM project (Google Calendar Map) puts all events from a google calendar on a google map. Whoa.  Tricky, eh?  Check it out for yourself – GCM: Chad’s Chicago on a map.

GCM on github

March 30th, 2011

Just a quick announcement for the geeks and developers out there regarding my GCM project:

I cleaned up some of the GCM code and put it on github as the mapfilter project, my first project on github. Hurray. I also updated the working GCM prototype with these updates. Most of the changes are under the hood, the biggest of which is that the core javascript functionality is now in its own file, cnMapFilter.js.

UI has not changed at all (I know, sad but true). However, one thing to note is that if you have firebug open and are poking around, you can now add a "debuglevel" parameter to the URL to dump tons of info to the console.

That’s all for now!

GCM prototype 2

September 12th, 2010

Just a quick update to announce that I updated GCM, my Google Calendar Map project.  New things:

  1. New GCM Homepage
  2. Old GCM prototype moved to gcm2009
  3. New features of GCM prototype (yes, still prototype)
    • New Date sliders
    • New “Warning” link to quickly fix problematic addresses
    • Reskinned to keep content a bit more tight (still needs work)

Please post any comments about GCM on the GCM Homepage.  Thanks!!

Google Calendar Map

June 12th, 2009

UPDATE 2010-9-12 Updated Prototype and New GCM Homepage – Leave new comments there!

As previously mentioned in my google maps mashup post, I love maps.  Due to the clean APIs and very fast response, many people have built cool mashups using the maps API.  I even got paid to do one for landmarkfinder.  But recently I've been using google calendar a lot, and with all the summer action here in Chicago I wanted an easier way to know where and when to go places.

Enter my new Google Calendar Maps Mashup. Basically it takes a google calendar and plots all the events on a map.  Well, at least all events that have a valid map address.  The mashup works with any google calendar, as long as it's public. You provide the URL of a google calendar XML feed, my javascript code eats it up and spits out markers on the map and lists them on the right side.  The list is really a table, with sortable columns (sort by day, event name, address). The map acts like a filter – you only see events that occur on the map canvas.  For example, if the map is zoomed in to show downtown Chicago, you might only see events in Grant Park.  But if you zoom way out you will see events in north Chicago or the suburbs, too.  This is great when there is a calendar with tons of events going on all over the place.  If you only have a couple events or all the events are in the same location, it's not too exciting.  Check it out.
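The map-as-filter behavior can be sketched without the Maps API at all. This is an illustrative example only – the event shape and bounds object here are assumptions, not the actual GCM code:

```javascript
// Keep only the events whose coordinates fall inside the current map
// viewport. bounds = { south, west, north, east } in degrees, assumed
// not to cross the antimeridian (fine for a Chicago-area map).
function filterByBounds(events, bounds) {
  return events.filter(e =>
    e.lat >= bounds.south && e.lat <= bounds.north &&
    e.lng >= bounds.west  && e.lng <= bounds.east);
}
```

In the real mashup the bounds come from the map canvas each time the user pans or zooms, and the filtered list is re-rendered as the sortable table on the right.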

I got to know jquery and javascript a little better during this project – I'm even planning on releasing my code as a plugin.  Expect that in a week or two, after I've cleaned it up and solidified the features and UI.

UPDATE 2010-9-12 Updated Prototype and New GCM Homepage – Leave new comments there!

Testing IE on a Mac

May 6th, 2009

I love my macbook pro, as most mac owners do, but if you develop web sites you need a good way to test IE, since 2/3 of the internet uses it (browser stats).  Since IE doesn't run on Mac, you'll need one of the following solutions:

  • VMWare Fusion 2 vs Parallels 4
    • Pros: IE running on a real Windows OS (2000, XP, Vista), fast and easy to use IE alongside mac apps (once setup – parallel note).
    • Differences: Both are very similar; some benchmarks show Parallels ~15% faster than Fusion (src), others favor Fusion (src2, src3)
    • Cons: Both around $80 (30 days free), You’ll need 2GB+ disk space to install a vmware OS
  • VirtualBox 2.2
    • Pros: Free version of VMWare Fusion and Parallels, from trusty Sun.  With “Guest Additions” Installed (instructions in user manual), works almost as well as Fusion 2.
    • Cons: Longer setup, a few glitches (src).
  • Bootcamp

    • Pros: Restart computer, booting into a real Windows install
    • Cons: must restart computer to test IE
  • Xenocode
    • Pros: run different instances of a program (ie6, ie7, and ie8)
    • Cons: only runs on windows, need fusion, parallels, or virtualBox
  • ie4osx
    • Pros: free
    • Cons: only intel mac, a little buggy, requires darwine and X11,
  • Other

I ended up using VMWare Fusion 2, since I had a copy of XP and liked using vmware in the past. Man, do I love it! Fusion 2 is much better than the older version – installation was super easy. And once installed, you can run it 2 ways: all windows apps (IE7, Firefox, etc) running in one vmware-windows-xp mac application window, or in Unity mode, which lets windows apps (IE7, firefox) run in their own mac application windows. I prefer the Unity way – the first time I ran IE the logo appeared on my mac dock and I chose to "keep on dock" to quickly launch and test in IE. Awesome.

In order to test IE6, IE7, and IE8, you can either create 3 vmware virtual machines (XP only likes one version of IE at a time), or better yet, create one XP virtual machine with one version of IE and launch the other IE versions through xenocode (you must download/install the spoon plugin, which only runs in IE).  Overall, running IE on the mac this way is kinda slow, but it's so incredibly easy it makes up for it.

UPDATE: Virtualbox is working smoothly .. not as good as vmware, but good enough to not buy vmware once my free 30 days are finished.

UPDATE 2: Figured out a good way to debug javascript in IE – use Microsoft Visual Web Developer.

Happy Testing!

Google Maps Mashups

March 27th, 2009

I've always loved maps, and when google maps came out they raised the bar.  After they released their maps API, the mashups began.  The first cool one I remember was a craigslist mashup that listed all apartments for rent on a map – that's huge when you're new to a city, and invaluable in renting-competitive cities like NY and SF.  Now there are tons of mashups out there, and here are a few that I've found recently.

Random mashups (coolest ones first)

Build your own map (sorted by Compete monthly usage, 2/2008–2/2009):

Find more maps (reference)

What’s your favorite map?