Archive for the ‘javascript’ Category:

Fluent 2013

September 14th, 2013

The Fluent conference this year was just as good as Fluent 2012 last year, and bigger and better in some ways. I still loved how at the end you walk away realizing how popular, important, diverse, and interesting javascript still is. I want to echo what I said in last year's post: I love seeing javascript used to solve real business problems, but the conference was much more than that.

Here are some of my highlights from the few talks I attended out of the 80-ish talks across 8 topics: Doing Business on the Web Platform, The Server Side, Front End Frameworks and Libraries, HTML5 and Browser Technologies, Mobile, Pure Code and JavaScript, The Leading Edge, and Tools, Platforms, and APIs.

Noteworthy Speakers

  • Paul Irish – Google Chrome Advocate / Wiz [Wed 9:45a]
    • Insanely fast with Chrome and Sublime.
    • Talked about workflow, similar to 2012 (editor, browser, etc.)
    • Chrome workspaces (in Canary) – maps local files to what Chrome gets over the network
  • Sylvain Zimmer (French guy, talked about scraping sites) [Wed 4:15p]
  • Eric Elliot – Adobe – [Wed 5:05p]
    • Classical Inheritance is Obsolete: How to Think in Prototypal OO
    • npm install stampit (stampit on github)
    • Classes deal with the idea of an object; prototypes deal with the object itself (a small sketch follows this list)
  • Ariya Hidayat – Sencha – [Thu 11:50a]
    • Code quality tools are very important for good software dev
    • Feedback is important: tools <–> engineer
    • CI server, build tools, editors, code testing, code coverage (istanbul)
  • Sarah Mei – Pivotal – [Thu 2:35p]
    • Emphasized importance of team communication ==> good code
    • One study said the biggest predictor of good code was (in order):
      1. team communication, more so than
      2. tech qualifications
      3. experience with code base
    • Great book (for js, too): “Practical Object-Oriented Design in Ruby: An Agile Primer” by Sandi Metz
    • Loves pair programming. Good speaker.
  • Brian Rinaldi (Adobe Systems)
    • NodeJS is more than just server js – it offers lots of command line tools: UglifyJS, Grunt, GruntIcon, JSHint, and HTTPster
  • Nick Zakas
    • Say Thank You. Author of 4 books. Good speaker.
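
As a taste of the prototypal style from Eric's talk, here is a minimal sketch using plain Object.create. This is the underlying pattern, not stampit's actual API (see stampit on github for that):

// Prototypal OO with plain Object.create – a sketch of the pattern,
// not stampit's API. No class needed: objects delegate to other objects.
var greeter = {
  greet: function () {
    return 'hello, ' + this.name;
  }
};

// Create a concrete object whose prototype is greeter.
var bob = Object.create(greeter);
bob.name = 'bob';

console.log(bob.greet()); // "hello, bob"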

Other Takeaways

  • Websites need fast page load
    • About 250ms is Google's target max page load time
    • 1000ms is the max time users will wait before a mental context switch
    • A 2000ms delay on a Bing/Google search results page caused a 4.3% drop in revenue per user (HUGE)
  • Improving page load
    • Tie performance improvement goals with biz goals
    • Test your site with Google's PageSpeed
    • Render Above The Fold (ATF) content ASAP – inline critical css and logos so the browser can render at least part of the page fast
  • SPDY (a node server sketch follows this list)
    • Essentially HTTP 2.0 today; available in Chrome, Firefox, Opera (no IE), Apache, node, nginx
    • Requires TLS (HTTPS)
    • Many big sites adopted it in 2012: some of Google, Facebook, Twitter, WordPress, Cloudflare
    • Really only need 2 domains to serve images (Google says 10 images is enough)
  • Refactoring JS – lots of tips
  • Games
    • asm.js is da shit (C code compiled to fast js)
    • Brendan Eich (Mozilla) demo'd the "Unreal" engine, ported to js/html5
  • Google Glass
    • Android, like TV. camera, geolocation
    • Simulator on github, mirror API
    • Avail in 2014, $200-$600
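
Since SPDY requires TLS, serving it from node looks roughly like the minimal sketch below. This assumes the spdy npm module; the key/cert paths are placeholders for your own TLS files.

// Minimal SPDY server sketch, assuming the "spdy" npm module.
var fs = require('fs');
var spdy = require('spdy');

var options = {
  key: fs.readFileSync('keys/server.key'),  // placeholder path
  cert: fs.readFileSync('keys/server.crt')  // placeholder path
};

spdy.createServer(options, function (req, res) {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('hello over SPDY');
}).listen(443);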


Heroku, Node.js, Performance and profile_time

April 11th, 2013

I’ve been using Heroku for about 10 months now, mostly with node.js. Recently one of our apps was using more web dynos than we thought it needed, so I looked into analyzing performance. I ended up writing my first node packaged module (npm) to solve my problem.

We have one Heroku app that is our main http server, using node.js, express, jade, less, and connect-assets to serve primarily html pages. This talks to a second Heroku app that we call our API server. The API server is also a node.js http server, using mongo as its database, serving json in a standard RESTful way. The API server is fast – when the html server is under enough load to need 10 web dynos, the API server can easily keep up with only 1 or 2 web dynos. My gut was asking: are 10 web dynos really necessary? And even with 10 web dynos, there were times when requests would time out or other errors would trigger. Maybe some error or timeout had a huge ripple effect, slowing things down.

So the problem is not just to figure out where time is spent on any one particular request, but where time is spent on average across all requests and all web dynos for the html server. How much time is spent communicating with the API server? Doing some internal processing? Or on something else in the Heroku black box?

The first step was educating myself on existing tools that could help me. We already use New Relic, which is awesome, and I highly recommend it to anyone who uses Heroku. At the time, New Relic support for node.js was still in beta (it still is as of this writing), and one of the features supported in other languages (like ruby) is the ability to use a New Relic API to track custom metrics. I thought this would be a great way to track down how much time, on average, is spent in various sections of our code. Too bad it doesn't work with node.js.

I considered other tools (like these) but the only one worth mentioning was nodetime. For us, nodetime was somewhat useful in that it offered details at levels outside the application, such as stats on cpu, memory, OS, and http routing. This did not appear to solve the problem (I admit I didn't read all their docs), but it did provide some insight and some validation that things are set up as they should be, based on documentation from Heroku and Amazon (Heroku runs on Amazon EC2).

However, nothing gave me what I needed – a high-level way to see where time is spent. So I built profile_time (code on github). Here's a description from the docs:

A simple javascript profiler designed to work with node and express, especially helpful with cloud solutions like heroku.

In a node + express environment, it simply tracks the start and end times of an express request, allowing additional triggers to be added in between. All triggers, start, and end times are logged when the response is sent.

Where this shines is in the processing of these logs. By reading all logs generated over many hours or days (and from multiple sources), times can be summed up and the total time spent between triggers can be known. This is useful when trying to ballpark where most of a node app's time is spent.

For example, if 2 triggers were added, one before and one after a chunk of code, the total time spent in that chunk can be known. More importantly, that time as a percent of the total time is known, so it is possible to know how much time is actually being spent in a chunk of code before optimizing it.
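
To make that concrete, here is a minimal sketch of the idea – this is the pattern profile_time automates, not its exact API (see the github docs for real usage). A middleware records a timestamp at request start, at each named trigger, and at response end, then logs one line per request for offline aggregation.

// Sketch of the profile_time idea in a node + express app.
// Method names here are illustrative, not profile_time's actual API.
var express = require('express');
var app = express();

app.use(function (req, res, next) {
  req.marks = [['start', Date.now()]];
  req.trigger = function (name) { req.marks.push([name, Date.now()]); };
  res.on('finish', function () {
    req.marks.push(['end', Date.now()]);
    // one log line per request; aggregate these later across all dynos
    console.log(JSON.stringify({ url: req.url, marks: req.marks }));
  });
  next();
});

app.get('/', function (req, res) {
  req.trigger('before-api');
  // ... call the API server here ...
  req.trigger('after-api');
  res.send('ok');
});

app.listen(process.env.PORT || 3000);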

In conclusion, jade rendering was the culprit. More specifically, compiling less to css in jade on each request was really time consuming (around 200ms per request, which is HUGE). To summarize the jade issue:

JADE BAD:
!= css('bureau')

JADE GOOD:
link(rel='stylesheet', href='.../bureau.css', type='text/css' )
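
The fix was to serve a precompiled css file (the link tag above); the compile step itself can happen once at startup or at build time instead of on every request. A rough sketch, assuming the less npm module – the file paths here are made up:

// Compile LESS to CSS once at startup instead of on every request.
var fs = require('fs');
var less = require('less');

var src = fs.readFileSync('styles/bureau.less', 'utf8');
less.render(src, function (err, output) {
  if (err) throw err;
  // newer less passes { css: ... }; older versions pass the string itself
  var css = output.css || output;
  fs.writeFileSync('public/bureau.css', css);
  console.log('bureau.css compiled once, served statically from here on');
});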

Overall I am impressed with Heroku and am quite pleased by how easy it is to create, deploy, and monitor apps. Most of my apps have been in node, but I have run php, ruby, and python apps as well. Heroku is not perfect, and I would not recommend it for serious enterprise solutions (details on that in another post). It's great for startups or small businesses where getting stuff up and running fast and cheap is key.

Fluent Conference Wins

June 4th, 2012

I just completed one of the best tech conferences I've ever been to – the Fluent javascript conference in SF. O'Reilly did a great job of providing many opportunities to learn more about various facets of the javascript world, including business, mobile, gaming, and tech stacks, detailed in the very useful fluent schedule. There was also tons of buzz around web apps (code shared on client and server), backbone.js, and node.js, among other things. It was well organized, with usually about 5 parallel sessions, and enough breaks to consolidate notes, meet other attendees, explore the exhibit hall, or just catch up on email. There were also a few organic meetups at night, but I did not make it to any of those.

I was happy to see discussion around the business side of javascript, mainly due to the rise of web apps and HTML5. Even though javascript has been around for 17 years, only in the last few years has there been an explosion of js frameworks and libraries. This is partially attributed to the mobile explosion, apple not supporting flash, and a really great dev community (yay github). With all these new tools available, companies can focus more on the bits they care about, allowing them to get new apps, features, and fixes in front of their users faster than ever.

Web apps were a very popular discussion area, from both the business and development side, and two sessions in particular highlighted this. First was how businesses are "Investing in Javascript" (pdf presentation) by Keith Fahlgren from Safari Books. The other was by Andrew Betts from labs.ft.com, discussing the Financial Times web app, which allows users to view content offline. Most people know that traditional newspapers are dying, but I liked how Andrew pointed out that "newspaper companies make money selling *content*, not paper". Also, Ben Galbraith and Dion Almaer from Walmart had a fun-to-watch Web vs Apps presentation (yes, it's true, tech isn't always DRY). The main takeaway from them (which was echoed throughout the conference) was that web apps are better than native apps in most ways except one – native can sometimes provide a better user experience (but not always). Of course you may still want to build a native app using html5 and javascript, and there are 2 great ways that people do this: Appcelerator's Titanium or PhoneGap (now Cordova, the apache open-source version). One of the coolest web apps I saw at the conference was from clipboard.com. Watch Gary Flake's presentation (better look out, pinterest).

For the uber techies out there, there were lots of insights on how companies successfully used various js libraries and frameworks (in other words, what's your technology stack). This is important to pay attention to, since not all the latest and greatest code is worthy of production environments. You should think about stability, growth, documentation, and community involvement. Here are a few bits I found interesting:

  • Trello (which supports IE9+ only): Coffeescript, LESS, mustache templates, jquery/underscore/backbone
  • just.me: jquery, less, node.js
  • new soundcloud: infrastructure: node.js, uglify, requireJS, almondJS... served using nginx. Runtime: backbone, Handlebars
  • twitter: less, jquery, node.js, more twitter tech stack
  • clipboard.com: Riak, Redis, NGINX, jQuery, Node.js, node.js modules
  • pubnub: custom c process faster than memcached and redis
  • picplum tech stack: coffeescript, backbone.js, rails 3.2.3, unicorn + resque, heroku postgres, heroku (nginx), AWS CloudFront & S3
  • stackmob: uses backbone, mongoDB, Joyent and EC2, Scala and Lift, Netty and Jetty

Finally, here are a few other cool tech-related tidbits from the conference. There was so much good stuff that this is not a complete list, just a few highlights from my notes.

GCM on github

March 30th, 2011

Just a quick announcement for the geeks and developers out there regarding my GCM project:

I cleaned up some of the GCM code and put it on github as the mapfilter project, my first project on github. Hurray. I also updated the working GCM prototype with this update. Most of the changes are under the hood, the biggest of which is that the core javascript functionality is now in its own file, cnMapFilter.js.

The UI has not changed at all (I know, sad but true). However, one thing to note is that if you have Firebug open and are poking around, you can now add a "debuglevel" parameter to the URL to dump tons of info to the console.
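
For example, the gating works roughly like this (a hypothetical sketch – the real parameter handling lives in cnMapFilter.js and may differ):

// Hypothetical sketch of the debuglevel idea: read ?debuglevel=N from
// the URL and only log messages at or below that level.
var match = window.location.search.match(/[?&]debuglevel=(\d+)/);
var debugLevel = match ? parseInt(match[1], 10) : 0;

function debugLog(level, msg) {
  if (window.console && level <= debugLevel) {
    console.log(msg);
  }
}

debugLog(1, 'map filter initialized'); // shows when debuglevel >= 1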

That’s all for now!