Monday, September 30, 2013

Building faster pages with Web Workers | Developer Drive

In recent years web pages and applications have come to rely on more, and increasingly complicated, JavaScript. Google Drive, for example, is a full-blown desktop-class application that depends on JavaScript staying responsive.
Although JavaScript performance has improved, large, complex scripts can still slow browsers down, or even cause them to freeze.
This is where Web Workers come in. They tell the browser to execute large, potentially problematic, scripts in the background so the user doesn’t have to deal with unresponsive pages. 

The limitations of JavaScript

JavaScript usually executes in a single thread: the browser works its way through the code, executing one statement at a time. The problem is that if a task takes a long time to complete, it freezes the browser and makes the page unresponsive.
So what can Web Workers do to help? Because they provide a means to create new threads, Web Workers allow scripts to be written with a multi-threaded architecture.
Working this way, with multiple threads, ensures the page remains responsive even when there is a large chunk of JavaScript to process.
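To make the blocking problem concrete, here is a minimal sketch (the function and numbers are our own illustration, not from the original article):

```javascript
// A long-running synchronous task. While this loop runs, the single
// JavaScript thread can do nothing else; in a browser, clicks and
// repaints queue up and the page appears frozen.
function heavyTask(n) {
  let total = 0;
  for (let i = 0; i < n; i++) {
    total += i;
  }
  return total;
}

console.log(heavyTask(1_000_000)); // 499999500000
// Handing heavyTask to a Web Worker instead lets the main thread keep
// responding to the user while the result is computed in the background.
```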

Creating Web Workers 

Creating a Web Worker is quite a simple task. Place the script you want a Worker to execute in a separate .js file, then simply create a new Worker object containing the path of the .js file: 
var worker = new Worker('tasks.js');
Provided the path is correct, the browser will start a new Worker thread, and the script will download asynchronously. If the path is wrong, the Worker fails silently.
The Worker is started by calling the postMessage() method.

Communicating with a Worker

Now you know how to spawn a new Web Worker, the next step is how to pass data between the Worker and the main application. This is done using the postMessage API to pass strings or JavaScript objects from one to the other:
worker.postMessage('This is the main application');
This sends the enclosed string from the main application to the Worker, but in order for the Worker to ‘hear’ the message, we need to set up some event listeners in the Worker script file:
self.addEventListener('message', function(e) {
  self.postMessage('Main said: ' +;
}, false);
This tells the Worker to ‘listen’ for a message, prepend ‘Main said: ’ to the received message string, and send the result back to the main application using self.postMessage().
Now the main application file needs an event listener to receive the message sent to it from the Worker:
worker.addEventListener('message', function(e) {
  alert(;
}, false);
This sets up the main application to receive a message from the Worker, and to display the ‘conversation’ as a browser alert:
This is the main application.
Main said: This is the main application.

Terminating Workers 

Once the Worker has finished its task it should be terminated. This can be done from the main application:
worker.terminate();
or the Worker can terminate itself by calling the close() method:
self.close();

Security and restrictions

Web Workers can be really helpful when building applications that rely heavily on JavaScript, but they do have some limitations: 
  • The Worker scripts must be served from the same domain as the main application.
  • The Worker script runs outside the main file and so does not have access to the DOM.
  • A Web Worker will not run from the local file system (file://), so testing locally requires a server stack such as LAMP or WAMP to be installed.


HTML5 continues to evolve and pleasantly surprise us. With Web Workers we have a way of creating large, JavaScript-heavy applications without worrying that the browser will not be able to cope.
The concept has been around for a couple of years, but unfortunately it hasn’t yet been used to its full potential.

Monday, September 23, 2013

How to get your app noticed before it dies | VentureBeat

“There are two kinds of people in this world: the workers and the hustlers. The hustlers never work and the workers never hustle.”
Yes, I just quoted a line from “Cocktail.” It’s a movie that aligns well with the world of tech as we know it today. Being an entrepreneur today means getting noticed in a hyper-competitive environment: if you are not backed by Andreessen Horowitz or partnered with a celebrity investor like Ashton Kutcher, you are going to have to hustle to get noticed.
Nobody knows you, and they don’t know about your product. Where do you go from here?
If you’re building apps, that initial obstacle may seem especially steep. There are more than 900,000 apps in the Apple App Store and more than 850,000 apps available for Android. You have to smartly promote your app, otherwise you’re a drop in the ocean.
When my co-founder and I built our own app, we knew we had to stand out. That’s why we took my 1973 Volkswagen bus, which happened to be orange, and drove 2,000 miles for six weeks, at an average of 55 miles per hour.
We made sure to hit every major college between our starting point in Tulsa and our ending point in New York City. We combed through student unions to talk about our app. We even, thanks to a chance encounter, guest-lectured to MBA-level classes at the University of Alabama. By the end of our journey, we’d increased our user base by 40 percent.
We were still only two guys in an orange bus. Really, it’s not about manpower. It’s about the hustle. Here are some of the simple things that we did in order to get our app noticed from nothing.

Think Big & Own It

It’s been said that the Yankees always win because other teams can’t stop looking at the pinstripes on their uniforms. When you hold yourself with authority and speak with confidence, people perceive you as being important. It doesn’t matter whether you’re a brand-new company and nobody’s heard of your app.
At South by Southwest (SXSW), we deliberately presented ourselves as the new app to know about. We’d pitch our “hot new app” to everybody who looked like somebody. If a guy across the street was getting out of his Ferrari, we’d run across to pitch him. We made sure that anyone with a SXSW badge knew about us. The result? We pitched big names like Mark Cuban and Tim Ferriss face to face. We landed an hour-long sit-down with Shaq. All because we had our game faces on and the mindset that they should be talking to us. You have to believe in yourself and get excited if you want other people to get excited.

Find the gatekeepers

If you want to get to know someone famous, and it’s not possible to get in touch with them directly, target their gatekeepers first. At SXSW, Shaq held a contest where you could pitch him through the mobile video messaging app Tout. We submitted a pitch, but it went through late. We didn’t end up winning the contest.
When Shaq announced the winner, he pointed out two guys sitting in the front row, saying that he never made any decisions without them. We immediately realized that these men were Shaq’s gatekeepers. We approached them after the talk and told them we thought they’d made a mistake and would appreciate 30 seconds of their time. After a five-minute discussion, we had our meeting with Shaq on the books. We targeted the gatekeepers, and it led us into the boardroom.

A rising tide lifts all boats

When you treat people politely from the get-go, you end up creating good memories for both yourself and the person you’re interacting with. If you ever need help yourself, they’ll remember you as a likeable person. If they don’t like you, they’ll think that helping you is a waste of time.
When we spoke at the two entrepreneurship classes at the University of Alabama, it was win-win-win, all because we were happy to help a professor interested in having us speak. We provided him with a new teaching tool. The students got to hear about our firsthand experience as entrepreneurs, and we had the opportunity to tell the class about our product. We built good memories of the experience for everyone.

Getting it done

I’m not saying that driving around the country in an orange bus and pitching everyone you meet is the only way to promote your app. It’s just one of many ways to make your product stand out. When you carry yourself as though you’re the hottest new business to hit the market, meet important people either face-to-face or via their gatekeepers, and always lend a helping hand, promotion will take care of itself.
My mantra? Always be closing — and never knock the hustle.

Wednesday, September 18, 2013

How We Did It: Millions of Daily Pageviews, 60% Less Server Resources « Build Internet

Two years ago at One Mighty Roar we noticed that a side-project from the early days of the company was gaining large amounts of traffic, despite not having been touched in ages. Over a few months, we spent some 20% time, which quickly turned into 120% time, revamping You Rather: redoing the site from the ground up, creating an API, and writing both an iOS and an Android app. As a result, You Rather has done some excellent numbers, gained some community recognition, and been a great project for the OMR team to boast about. But, as with most side-projects, it fell low on the priority list when new opportunities came along.
At the end of this summer, it became our goal to give You Rather a breath of new life. The first step was to axe the aged Apache HTTP server in favor of Nginx. We’ve been using Nginx for 99% of our work over the last year and haven’t looked back since. However, most of our Nginx experience has been writing .conf files for new sites and servers, never rewriting old .confs for existing production sites.
In just an afternoon, we moved a site with 400+ active concurrent users doing 1k+ pageviews a second, from Apache to Nginx, without any downtime.

Brushing off the Dust

To give some background, we ♥ AWS. You Rather uses every bit of the AWS stack, from EC2 to EBS to Route 53 to S3. To get a “dev” environment set up for ourselves, we grabbed our current AMI of the You Rather site and spun up a new instance to hack on.
A simple yum install nginx got Nginx up and running on our CentOS box in no time. Step one, complete.
To start, we tossed up our generic Nginx conf:
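The post doesn’t reproduce the conf itself, but a generic Nginx server block for a PHP site of this kind looks something like the following (the domain, root path, and PHP-FPM socket are placeholders, not the actual You Rather values):

```nginx
server {
    listen 80;
    server_name example.com;               # placeholder domain
    root /var/www/site/public;             # placeholder document root
    index index.php;

    # Send everything through the front controller; no .htaccess needed
    location / {
        try_files $uri $uri/ /index.php?$query_string;
    }

    # Hand PHP scripts off to PHP-FPM over a Unix socket
    location ~ \.php$ {
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_pass unix:/var/run/php-fpm.sock;   # placeholder socket path
    }
}
```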
Lo and behold, most things…just worked. Granted, since we had done plenty of work getting the AMI set up with Apache and PHP initially, switching over to Nginx was pretty easy. Step two, complete.

Tweaking for Performance

Nginx has a few obvious benefits over Apache when it comes to the server layer, but Nginx isn’t the sole reason for You Rather’s performance improvement. To see why, let’s clarify what exactly makes the difference here.
Where Nginx really shines is that it doesn’t process “.htaccess” files. While those files make for convenient setups on shared hosting plans or shared machines, traversing the file directory looking for them happens on every request, which gets expensive. Nginx, on the other hand, loads all of its config once at launch, and that’s it: you’re good to go.
Another place we saw room for improvement was the interaction between our webserver and PHP. Our old deployment of You Rather used mod_php with Apache. Although the initial setup for Apache and mod_php was quick and easy, a big disadvantage is the way PHP is processed per request. Opting for PHP-FPM in favor of mod_php gave us significant performance boosts. Whereas mod_php interpreted PHP as part of the Apache process, quickly soaking up large amounts of CPU and memory, PHP-FPM can be fine-tuned for great performance. Using PHP processes that are gracefully launched and killed, Unix sockets, and granular process management, PHP-FPM helped us tune back the overall resource usage on the box. On top of all that, we can now tweak Nginx’s configuration without affecting PHP-FPM, and vice versa.
As one last silver bullet for PHP performance, we added an opcode cache. Testing out Zend OPcache and APC, we found that OPcache knocked it out of the park for us, speeding up PHP processing and reducing memory consumption overall.
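Enabling Zend OPcache is typically a matter of a few php.ini lines; a minimal configuration might look like this (the values are common illustrative settings, not the ones from the post):

```ini
; Load and enable the opcode cache
zend_extension=opcache.so
opcache.enable=1

; Memory for cached compiled scripts, and how many files to cache
opcache.memory_consumption=128
opcache.max_accelerated_files=4000

; Re-check files on disk for changes (can be disabled on immutable deploys)
opcache.validate_timestamps=1
```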
Step three, complete.

Sieging the Site

One thing we’ve gotten better at as a result of You Rather’s traffic is testing our apps under heavy load. On any given day, we might see large traffic spikes due to social media, aggregators (see: the Slashdot effect), or App Store features. Two tools we use a lot to test load handling are siege and ab. For this setup, we mostly used siege.
Once we got Nginx serving our site up just right on our dev instance, it was time to hammer the box to see what it could handle. Using siege, we could see what kind of a load a single instance could handle. One great advantage of siege is its ability to hit an endpoint with concurrent connections, perfect for simulating a real-world use-case of You Rather. Starting the command:
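The command itself isn’t shown above; given the flags described, the invocation would have been along these lines (the URL is a placeholder, not the site’s real endpoint):

```shell
# 20 concurrent users (-c 20), hammering the page non-stop for one minute (-t 1M)
siege -c 20 -t 1M
```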
We simulated 20 concurrent users (-c 20) hitting the site on that instance, non-stop for a minute (-t 1M). siege gives great analysis of the tests both during and afterwards. Things looked great from the get-go: throughput was much higher than on the old Apache AMI, and response times were generally lower. We kept tweaking the siege test, varying between 10 and 100 or more concurrent connections (protip: don’t go over 75 connections generally, or things will break…), hitting different endpoints like the user profile page, a specific question’s page, and even the 404 page.
We compared the results from siege’ing the Nginx instance to a version of the current Apache site running on a control instance. In short, the Nginx instance performed 100% more transactions, with 50% less response time per transaction. Better yet, we watched top on the Nginx box while testing this out. It handled it like a boss, barely taxing the CPU while we slammed it with connections. Nginx was clearly giving the site the boost it needed.

Going Live

Using all of the glory that is AWS, we already had load balancers set up for the site, as well as auto-scaling groups and rules in place for killing unhealthy instances and spinning up new ones where needed. But in our quest to keep the site as available as possible, spinning up new instances under heavy load can get expensive.
Once we made a new AMI for the new deployment of the site, it was time to tweak the auto-scale group to spin up new instances from the new AMI. Using the AWS CLI, we just set the group to spin new instances up from the new AMI. Next, we set the number of desired instances for the group to a healthy number that we knew wouldn’t crash the site, leaving room for a mix of Apache and Nginx instances to be balanced side-by-side.
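The exact commands aren’t given in the post; with the modern AWS CLI the equivalent steps would look roughly like this (the group and launch-configuration names are placeholders):

```shell
# Point the auto-scaling group at a launch configuration built from the new AMI
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name site-asg \
    --launch-configuration-name site-nginx-lc

# Keep enough capacity for Apache and Nginx instances to run side by side
aws autoscaling update-auto-scaling-group \
    --auto-scaling-group-name site-asg \
    --desired-capacity 4
```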
From here, we slowly killed off the old Apache instances one by one manually, letting the auto-scale group spin a new Nginx instance up in its place. Meanwhile, watching Google Analytics, we still had thousands of pageviews and API calls happening per second, live to the site, including the new Nginx boxes.
Finally, once not a single Apache box was left behind the load balancer, we started scaling back the number of desired instances for the group. From 4… to 3… to 2… We probably could have run it all off one box, but for the sake of our own sanity, 2 sounded right.
A week later, we had a bit of a post-mortem, analyzing the numbers; the traffic charts made it easy to guess where the Nginx revamp happened.
Our varying number of instances is more or less static now, and has been for weeks.

We have been serving millions of pageviews and API calls off two Nginx instances with 0% downtime for a solid three weeks now. Sorry, Apache: there’s no looking back.