Wednesday, October 30, 2013

How to Squeeze Two Days of Work into One

For any entrepreneur, especially those with other responsibilities, like school, a day job or family, being a master of time management is a necessity. 
As a second-year student at Babson College in Wellesley, Mass., I am constantly juggling the challenges of schoolwork, while simultaneously trying to get my media-publishing website Back to Black off the ground. 
For a while, it felt like there were not enough hours in a day, weeks in a month, and months in a year to do it all. Tasks, big and small, became so overwhelming that procrastination seemed like the best alternative. I knew that in order to make it all work, I had to get my act together. So I became organized and started making every second count. 
For those struggling with the balancing act, here are some time management tips I learned: 
Have a to-don’t list. All my life I’ve heard everyone say, “You need a to-do list.” How about a to-don’t list? It’s the same concept as a to-do list, but instead you write down all the things you should not be doing. We all have triggers that make it tempting to stop doing our work. Some people fall prey to friends wanting to hang out, while others get distracted by social media. 
What should a to-don’t list include? I recommend writing down all of the people, places and things that deter you from your goals and devise a strategy for dodging these distractions. Building this list is going to challenge you to be both self-aware and honest with yourself but will help you stay focused.
There are apps for everything these days, and time management is no exception. If you need to block out distracting websites, try Chrome's StayFocused. If you just get distracted by social media sites like Twitter or Facebook, you could try Anti-Social. You can also monitor your productivity levels while on the computer with RescueTime.
Know your responsibilities. Everyone has roles in life -- be it a student, parent or an artist. You need to make sure you define your role and what it means to you and others. By doing so, you understand how to leverage your decision-making process to become more advanced in those roles. 
For example, my roles include being a student, aspiring entrepreneur, mentor and head of civic engagement for my black student union. With so little time, I make sure I have a clear definition of what these roles entail, so I know not to accept offers that don't correlate with my objectives.
If you say yes too much then all you are doing is spreading yourself thin on things that will not help you grow. On the contrary, accepting opportunities that align with the things you do can be fulfilling and perhaps open doors for the future. That said, make sure you monitor how many offers you accept, so you do not burn out.
Utilize SMART goals. 
Contrary to popular belief, you don’t need to be involved in a bunch of tasks to be deemed successful. You just need to be smart about how you use your time, making sure activities are relevant and will advance your roles and goals. 
I follow the advice I learned in school and approach each task using the SMART method. Every task I set needs to be specific to a role, measurable, attainable considering my resources, realistic pertaining to my schedule and tied to a set completion time.
Purchasing a project planner or storing your tasks on an e-calendar can help you stay organized. 
As an entrepreneur, you should do things to advance your goals every day. There will be times you fall off track, but when that happens, just dust yourself off and continue doing what you need to do. 

10 Qualities Every Leader of the Future Needs to Have

The reigning theory in business has long been that "alpha" leaders make the best entrepreneurs. These are aggressive, results-driven achievers who assert control and insist on a hierarchical organizational model. Yet I am seeing increasing success from "beta" startup cultures where the emphasis is on collaboration, curation and communication.
Some argue that this new horizontal culture is being driven by Gen-Y, whose focus has always been more communitarian. Other business culture experts, like Dr. Dana Ardi, in her new book The Fall of the Alphas, argue that the rise of the betas is really part of a broader culture change driven by the Internet -- emphasizing communities, instant communication and collaboration.
Can you imagine the overwhelming growth of Facebook, Wikipedia and Twitter in a culture dominated by alphas? This would never happen. I agree with Ardi who says most successful workplaces of the future need to adopt the following beta characteristics and better align themselves with the beta leadership model:
1. Do away with archaic command-and-control models. Winning startups today are horizontal, not hierarchical. Everyone who works at an organization feels they're part of something, and moreover, that it's the next big thing. They want to be on the cutting-edge of technology.
2. Practice ego management. Be aware of your own biases and focus on the present as well as the future. You need to manage the egos of team members by rewarding collaborative behavior. There will always be a need for decisive leadership, particularly in times of crisis. I'm not suggesting total democracy.
3. Stress innovation. Betas believe that team members need to be given an opportunity to make a difference -- to give input into key decisions and communicate their findings and learnings to one another. Encourage team members to play to their own strengths so that the entire team and organization stays ahead of the competition.
4. Put a premium on collaboration and teamwork. Instead of knives-out competition, these companies thrive by building a successful community with shared values. Team members are empowered and encouraged to express themselves. The best teams are hired with collaboration in mind. The whole is thus more than the sum of its parts.
5. Create a shared culture. Leadership is fluid and flexible. Integrity and character matter a lot. Everyone knows about the culture. Everyone subscribes to the culture. Everyone recognizes both its passion and its nuance. The result looks more like a symphony orchestra than an advancing army.
6. Be ready for roles and responsibilities to change weekly, daily and even hourly. One of the big mistakes entrepreneurs make is they don't act quickly enough. Markets and needs change fast. Now there is a focus on social, global and environmental responsibility. Hierarchies make it hard to adjust positions or redefine roles. The beta culture gets it done.
7. Temper confidence with compassion. Mindfulness, of self and others, by boards, executives and employees, may very well be the single most important trait of a successful company. If someone is not a good cultural fit or is not getting their job done, make the change quickly, but with sensitivity.
8. Invite employees to contribute. The closer everyone in the organization comes to achieving his or her singular potential, the more successful the business will be. Successful cultures encourage their employees to keep refreshing their toolkits, keep flexible, keep their stakes in the stream.
9. Stay diverse. Entrepreneurs build teams. They don't fill positions. Cherry-picking candidates from name-brand universities will do nothing to further an organization and may even work against it. Don't wait for the perfect person -- he or she may not exist. Hire for track record and potential.
10. Not everyone needs to be a superstar. Superstars don't pass the ball, they just shoot it. Not everyone wants to move up in an organization. It's perfectly fine to move across. Become your employees' sponsor -- on-boarding with training and tools is essential. Spend time listening. Give them what they need to succeed.
Savvy entrepreneurs and managers around the world are finding it more effective to lead through influence and collaboration, rather than relying on fear, authority and competition. This is rapidly becoming the new paradigm for success in today's challenging market. Where does your startup fit in with this new model?

Debugging PHP applications with HHVM

In the previous parts of this series we got you started with HHVM and showed how we could get the symfony standard edition running on HHVM. This time we will dive deeper into HHVM by using it to debug our application.
For most people the easiest way of debugging a PHP application is to place var_dump() and die() statements all over the code. Another option is installing xdebug, which has gotten a lot easier nowadays due to IDE integrations.
In this blog post we'll show you how to debug your PHP application using HHVM. We describe how you can step through your program, set and manage your breakpoints, how to inspect variables and take a peek at helpful features like conditional breakpoints.
Note: for all examples in this post we use the HHVM 2.2.0 precompiled binaries installed on a vagrant box running Ubuntu 12.04 (apt-get installable!).

A faulty program

Let's start off by creating a "faulty" program we can debug. Create a simple script and save it as example.php:
<?php
$a = 4; $b = 2;
echo divide($a, $b) . "\n";
$a = 5; $b = 0;
echo divide($a, $b) . "\n";

function divide($a, $b)
{
    return $a / $b;
}
Running this example with hhvm example.php prints a warning. We will try to find the bug using the hhvm debugger.

Firing up the debugger

Start with running hhvm in debug mode using the following command:
hhvm --mode debug example.php
HHVM starts and loads the program, but does not execute it yet. HHVM is waiting for a command. Run the example by giving the run command.
Running the HHVM debugger

Stepping through the program

A first approach to debugging this program would be to walk through each step of the execution. With next we can do exactly that. This time start the program using the continue command. This loads our program and pauses it just before the first line of code. Use the next command to execute the program one line at a time.
Continue/next in the HHVM debugger
You may have noticed that the debugger did not step into the divide() function. To check out what is going on in divide(), we can use the step command as soon as we reach line 3. step lets us step into a function.
Step into a function with the HHVM debugger
The counterpart of step is the out command. It can be used to continue running the program until it gets to the point where the function was called.

Our first breakpoint

Stepping through an entire program by hand is a bit cumbersome. In fact we want HHVM to pause the script when it enters the divide() function. This can be done by setting a breakpoint. A breakpoint is an intentional stop or pause placed in a program. In HHVM we set a breakpoint with the break command.
break divide() # break when the divide function is called
This will set a new breakpoint when HHVM enters the divide function. Start the program again with the run command. HHVM now pauses when it hits the breakpoint.
Breaking on a function in the HHVM debugger
To continue the execution, give the continue command. The program resumes execution and breaks again when the breakpoint is hit a second time. Continue the execution once more. Now the program gives the warning and finally ends normally.
Other possibilities for setting breakpoints include:
break example.php:9 # break on line 9 of example.php
break Math::divide() # break when the divide function of the Math class is called

Inspecting variables

When our program hits a breakpoint and pauses execution, we can inspect the value of all variables in the current scope by using the print command. Run the program again (run). When it breaks, inspect the value of $a and $b with the following commands.
print $a
print $b
Inspecting variables with the HHVM debugger
Continue (continue) the program and inspect $a and $b on the second break. We will notice $b has a value of 0. When we divide a number by 0, we will get a "Division by zero" warning.

Conditional breakpoints

Now we know what causes the problem, we have to find out where $b is set to 0. First we need to execute the program until it enters the divide function with $b being 0. One approach is to inspect $b every time it enters divide().
In this case divide() is called only two times, but imagine a situation where this function is called 1337 times. You definitely don't want to inspect and continue that many times to find a situation where $b is equal to 0. Conditional breakpoints to the rescue!
A conditional breakpoint only breaks the program when a certain condition is met. The syntax in HHVM is break <location> if <condition>.
In our case we're interested in the situation where $b is equal to 0. Start off by clearing the previously set breakpoints using:
break clear all
Then set a conditional breakpoint with:
break divide() if $b == 0
Run the program and notice that HHVM doesn't break until $b is equal to 0. Continue to see the program error out again.
Conditional breakpoints with the HHVM debugger

Getting a trace

Now that we have found a point in our program where $b is equal to 0, we want to know where divide() was called from, so we can take a look at that piece of code and find out why $b is 0.
HHVM gives you a stack trace of the current breakpoint when you use the where command. Run the program again, and use where as the debugger breaks the program execution.
Getting a stack trace using the HHVM debugger

Managing breakpoints

If we place a lot of breakpoints we can lose track of all the breakpoints we have set. HHVM provides a list of all breakpoints with the break list command.
All breakpoints are given a number (e.g. 1), making it possible to remove a breakpoint (break clear 1) or temporarily disable one or all breakpoints with the commands:
break disable 1
break enable 1
break disable all
break enable all
For a complete overview of all the possibilities use the break help command.

Wrap up

In this post we've introduced you to debugging with HHVM. In the next blog post we'll move from debugging a cli program to debugging web requests!
The HHVM debugger has been quite solid for us so far. The only improvement we could come up with for now would be vim integration!

Facebook Admits Teen Use May Be Declining

Many have claimed that Facebook's hold on the teenage demographic has been slipping. Facebook's earnings call on Wednesday did nothing to squelch these claims.
Facebook CFO David Ebersman said that while monitoring teen usage is a challenge, daily use among some may be declining.
"Youth usage among U.S. teens was stable overall from Q2 to Q3, but we did see a decrease in daily users partly among younger teens," Ebersman said.
Ebersman's definition of "younger teens" is unclear, as Facebook does not break out its user total by age group. But the comment does speak to the challenge Facebook has faced in retaining the attention of young consumers who are using other social sites like Snapchat, Twitter and even its own Instagram.
Younger teens may refer to Facebook users between 13 and 17 years old, an age group that Facebook has spent time focusing on in recent weeks.
Two weeks ago, Facebook changed its policy to allow teenagers in that age demographic to post publicly on the site. Previously, teens 13-17 could only share with friends and friends of friends, but now they can share with any users on Facebook.
"Teens are among the savviest people using social media, and whether it comes to civic engagement, activism, or their thoughts on a new movie, they want to be heard," Facebook wrote in a blog post about the change. "While only a small fraction of teens using Facebook might choose to post publicly, this update now gives them the choice to share more broadly, just like on other social media services."
Facebook's stock was up by more than 15% after hours following the earnings release, but dipped after the comment about teens. It's now flat.

Tuesday, October 22, 2013

Build a mobile BigData strategy around GoodData - TechRepublic

Take a look at GoodData's mobility strategy, which gives mobile users two options for accessing GoodData dashboards.

Mobile devices, the iPad in particular, offer a next-generation dashboard for busy executives and other mobile workers who need to tap into Big Data repositories. GoodData, a cloud business intelligence startup, is offering customers two methods for tapping into corporate data that they store in the GoodData platform.

Using the HTML 5 app

Hubert Palan, GoodData's VP of Product Management, explained in an email that their overall mobile strategy focuses on an HTML 5 app, a browser-native application they have right now with a user interface optimized for touch devices. The full GoodData functionality is available across mobile platforms. The company is investing development time and resources into additional touch-screen-optimized UI components, with offline access becoming available in the first half of 2014. Figure A shows the GoodData HTML 5 app running through Chrome on an Android tablet.
Figure A: HTML 5 app access via Chrome on an Android tablet.
Figure B is an example of using the HTML 5 app on an iPad:
Figure B: HTML 5 app access on an iPad.

Using the GoodData app on your iPad

The GoodData app is free from the App Store, and it’s optimized for viewing dashboards. However, users will need to have a user account in their organization’s GoodData cloud platform.
Going the iPad app route for a Big Data dashboard is a great method for democratizing mobile access across an organization, because it’s such a consumer-like experience. This is an ideal tool for executives, mobile sales people, and knowledge workers that an organization is cultivating for an internal Big Data team.
GoodData enables you to access your dashboards from anywhere you have connectivity. It packs some powerful features for mobile access to Big Data reporting, including:
  • Dashboards and reports
  • Tracking of metrics and key performance indicators (KPIs)
  • Ad hoc analysis
  • Collaboration and sharing
I tested the GoodData app on my iPad using a GoodData trial account. When you login to your GoodData account, you'll see the Projects screen (Figure C).
Figure C: The Projects screen.
From the Projects screen, I tapped on the Good Marketing demo. The screen that opens is a testament to why the iPad makes a better dashboard (Figure D):
Figure D: Lead to Revenue dashboard.
This is a well-executed report that doesn't require tasking a programmer every time an executive wants a report run. With the GoodData app, you have pinch-and-zoom control over what you're viewing. You can't edit data, but this iPad dashboard beats any internally developed dashboard for presentation and productivity power over the old ways of doing things. Figure E shows an example of a Pipeline Analysis report.
Figure E: Pipeline Analysis report.
Figure F shows an example of a Sales Management breakdown report.
Figure F: Sales Management breakdown report.
When you tap on the Total Lost tab, you can drill down into a Total Lost report (Figure G).
Figure G: Total Lost report.
There are no configuration controls available in the GoodData app, so you don’t have to concern yourself with an executive or other end user getting themselves into trouble with configuration or security settings.
Overall, I liked the GoodData app. However, it frequently crashed on my iPad running iOS 7. I'm sure that GoodData will deliver an iOS 7 update to their app, but at the time of this writing, that hasn't happened. Regardless, this app is a definite candidate for some of the new iOS 7 features, including per-app VPN, SSO, and the remote configuration of managed apps.

Put the GoodData mobile strategy to work for your enterprise

The GoodData mobile strategy is basic yet flexible, and it can enable an enterprise standardized on GoodData for Big Data and Business Intelligence (BI) to open mobile access to authorized users across their organization. Benefits include:
  • Android tablet users can access their GoodData account through the Android browser
  • Less technical users can be directed to use the GoodData app on an iPad to access their GoodData account
  • Less IT department time spent on producing reports for executive management and others
  • Secure access to data reporting and dashboards in the cloud through the users' GoodData account
  • Big Data teams can setup reports for knowledge workers and deploy them out to the iPad app for consumption
They cover all the bases in their mobile strategy, which is important because many organizations are still trying to implement a mobile BI or Big Data strategy, and staying with one vendor is a saner option when it comes to support and integration costs.


GoodData's mobile strategy -- and their iPad app in particular -- shows that the convergence of Big Data and mobility is upon us. It's up to the enterprise to use this convergence to equip their executives, sales teams, and knowledge workers for a competitive advantage through anytime/anywhere data access.

Monday, October 21, 2013

Using the memcached telnet interface

Memcache Telnet Interface

This is a short summary of everything important that helps to inspect a running memcached instance. Memcached speaks a simple plain-text protocol, so you can connect to it with telnet. The following post describes the usage of this interface.

How To Connect

Use "ps -ef" to find out which IP and port were passed when memcached was started, and use the same with telnet to connect to memcached. Example:
telnet localhost 11211
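Telnet is just a convenience; anything that speaks TCP works. Below is a minimal Python sketch of the same interface. The helper names, host and port are assumptions (use whatever "ps -ef" showed), and send_command needs a running instance to actually return anything:

```python
import socket

def build_set(key, value, flags=0, exptime=0):
    """Frame a `set` command for the text protocol:
    header line, data block, trailing CRLF."""
    data = value.encode()
    header = f"set {key} {flags} {exptime} {len(data)}\r\n".encode()
    return header + data + b"\r\n"

def send_command(payload, host="127.0.0.1", port=11211):
    """Open a TCP connection, send the framed command, and return the
    first response line (e.g. b'STORED'). Equivalent to typing the same
    thing into the telnet session."""
    with socket.create_connection((host, port)) as sock:
        sock.sendall(payload)
        return sock.recv(4096).split(b"\r\n", 1)[0]

print(build_set("mykey", "hello", exptime=60))  # b'set mykey 0 60 5\r\nhello\r\n'
```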

Supported Commands

The supported commands (the official ones and some unofficial) are documented in the doc/protocol.txt document.
Sadly the syntax description isn't really clear and a simple help command listing the existing commands would be much better. Here is an overview of the commands you can find in the source (as of 16.12.2008):
Command   | Description                                    | Example
get       | Reads a value                                  | get mykey
set       | Set a key unconditionally                      | set mykey 0 60 5
add       | Add a new key                                  | add newkey 0 60 5
replace   | Overwrite existing key                         | replace key 0 60 5
append    | Append data to existing key                    | append key 0 60 15
prepend   | Prepend data to existing key                   | prepend key 0 60 15
incr      | Increments numerical key value by given number | incr mykey 2
decr      | Decrements numerical key value by given number | decr mykey 5
delete    | Deletes an existing key                        | delete mykey
flush_all | Invalidate all items immediately               | flush_all
flush_all | Invalidate all items in n seconds              | flush_all 900
stats     | Prints general statistics                      | stats
stats     | Prints memory statistics                       | stats slabs
stats     | Prints memory statistics                       | stats malloc
stats     | Print higher level allocation statistics       | stats items
stats     |                                                | stats detail
stats     |                                                | stats sizes
stats     | Resets statistics                              | stats reset
version   | Prints server version                          | version
verbosity | Increases log level                            | verbosity
quit      | Terminate telnet session                       | quit

Traffic Statistics

You can query the current traffic statistics using the command
stats
You will get a listing with the number of connections, bytes in/out and much more.
Example Output:
STAT pid 14868
STAT uptime 175931
STAT time 1220540125
STAT version 1.2.2
STAT pointer_size 32
STAT rusage_user 620.299700
STAT rusage_system 1545.703017
STAT curr_items 228
STAT total_items 779
STAT bytes 15525
STAT curr_connections 92
STAT total_connections 1740
STAT connection_structures 165
STAT cmd_get 7411
STAT cmd_set 28445156
STAT get_hits 5183
STAT get_misses 2228
STAT evictions 0
STAT bytes_read 2112768087
STAT bytes_written 1000038245
STAT limit_maxbytes 52428800
STAT threads 1
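Every line of this listing has the shape "STAT <name> <value>", which makes it easy to consume from a script. A small Python sketch that parses such output into a dict (the sample below is trimmed from the listing above; the function name is made up):

```python
def parse_stats(raw):
    """Parse memcached `stats` output ("STAT <name> <value>" lines) into a dict."""
    stats = {}
    for line in raw.splitlines():
        parts = line.split()
        if len(parts) == 3 and parts[0] == "STAT":
            try:  # numbers become int/float, everything else stays a string
                stats[parts[1]] = float(parts[2]) if "." in parts[2] else int(parts[2])
            except ValueError:
                stats[parts[1]] = parts[2]
    return stats

sample = """STAT uptime 175931
STAT version 1.2.2
STAT cmd_get 7411
STAT get_hits 5183
STAT get_misses 2228
STAT evictions 0"""

stats = parse_stats(sample)
print(stats["evictions"])                              # 0: nothing pushed out yet
print(round(stats["get_hits"] / stats["cmd_get"], 2))  # hit ratio ~0.7
```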

Memory Statistics

You can query the current memory statistics using
stats slabs
Example Output:
STAT 1:chunk_size 80
STAT 1:chunks_per_page 13107
STAT 1:total_pages 1
STAT 1:total_chunks 13107
STAT 1:used_chunks 13106
STAT 1:free_chunks 1
STAT 1:free_chunks_end 12886
STAT 2:chunk_size 100
STAT 2:chunks_per_page 10485
STAT 2:total_pages 1
STAT 2:total_chunks 10485
STAT 2:used_chunks 10484
STAT 2:free_chunks 1
STAT 2:free_chunks_end 10477
STAT active_slabs 3
STAT total_malloced 3145436
If you are unsure whether you have enough memory for your memcached instance, keep an eye on the "evictions" counter given by the "stats" command. If you have enough memory for the instance, the "evictions" counter should be 0 or at least not increasing.

Which Keys Are Used?

There seems to be no builtin function to determine the currently set keys. However, you can use the
stats items
command to determine how many keys exist.
stats items
STAT items:1:number 220
STAT items:1:age 83095
STAT items:2:number 7
STAT items:2:age 1405
This at least helps to see if any keys are used. To dump the key names from a PHP script that already does the memcache access you can use the PHP code from

Never Set a Timeout > 30 Days!

While this has nothing to do with the telnet access, this is a problem you might run into. If you try to "set" or "add" a key with a timeout bigger than the allowed maximum, you might not get what you expect, because memcached then treats the value as a Unix timestamp. And if the timestamp is in the past, it will do nothing at all. Your command will silently fail.
So if you want to use the maximum lifetime specify 2592000. Example:
set my_key 0 2592000 1
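Client libraries commonly guard against this by converting any relative TTL above 30 days into an absolute Unix timestamp before sending it. A sketch of that guard (the function name is made up):

```python
import time

THIRTY_DAYS = 60 * 60 * 24 * 30  # 2592000: memcached's relative-TTL ceiling

def normalize_expiry(ttl):
    """Relative TTLs above 30 days must be sent as absolute Unix timestamps,
    otherwise memcached silently misinterprets them. Small TTLs pass through."""
    if ttl > THIRTY_DAYS:
        return int(time.time()) + ttl
    return ttl

print(normalize_expiry(60))                              # unchanged
print(normalize_expiry(THIRTY_DAYS + 1) > THIRTY_DAYS)   # True: now an absolute timestamp
```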

Disappearing Keys on Overflow

Despite the documentation saying something about wrapping around 64bit, overflowing a value using "incr" causes the value to disappear. It needs to be created using "add"/"set" again.

Memcached eviction prior to key expiry? - Stack Overflow

Can a key/value pair stored in memcached get evicted prior to its expiry if there is still free space available?

Basically, memcache allocates space in chunks rather than on demand, then stores items into the chunks and manages that memory manually. As a result, smaller items can "use" much larger pieces of memory than they would if space was allocated on a per-item basis.
The link explains it much better than I can
Edit: adding more explanation
Memcache works by allocating slabs of various sizes. These slabs have a number of specifically sized slots (which is determined by the slab's class).
Hypothetically (and using only my abstraction of Memcache's internals), let's say the smallest slab class was 1K. This means that the smallest slots are 1K. Furthermore, memcache will only allocate these in sets of 1024, or 1MB of memory at a time. Let's say we had such a configuration and we want to store a 1-byte object (a char value?) into Memcache. Let's suppose this would require 5 bytes of memory (4-byte key?). In an empty cache, Memcache would allocate a new slab of the smallest size that can hold the value (1K slots). So storing your 5 bytes will cause memcache to allocate 1MB of memory.
Now, let's say you have a lot of these. The next 1023 will be "free" -- Memcache has already allocated the memory, so no additional memory is needed. At the end of this, you've stored 1024 * 5 bytes = ~5KB, but Memcache has used 1MB to store it. Store a few million of these, and you can imagine consuming gigabytes of memory to store kilobytes of data.
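The arithmetic of that hypothetical, spelled out (the slot and page sizes are the made-up numbers from the example, not memcached defaults):

```python
# Numbers from the hypothetical above: 1 KB slots, 1 MB pages, 5-byte items.
SLOT = 1024                 # smallest slot size in this made-up slab class
PAGE = 1024 * 1024          # memcache allocates memory one full page at a time
ITEM = 5                    # bytes of actual data per stored item

slots_per_page = PAGE // SLOT      # 1024 items fit before another page is needed
stored = slots_per_page * ITEM     # ~5 KB of real data pins a full 1 MB page
print(slots_per_page, stored)      # 1024 5120
```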
This is a worst case. In practice Memcache can be configured to have a minimum slab class size quite small if needed, and the growth factor (size difference between the slab-classes) can be widened or narrowed. If you're caching database queries, you might have items sized from a few bytes to several KB, with page content you could even get into the MB.
Here's the key point: Memcache won't reclaim memory or clean up slabs (newer versions can do this now, at a pretty significant performance hit, but traditionally this is how Memcache works).
Suppose you have a system that has been happily running and caching for a few days. You have hundreds of slabs of various sizes. You deploy a new page-caching strategy to your app without resetting the cache. Now instead of caching whole pages, you're caching parts of the page. You've changed your caching pattern from storing lots of ~1MB objects to storing lots of ~10KB objects. Here's where we get into trouble. Memcache has allocated a bunch of slabs that hold objects of about 1MB. You never used to cache many 10KB objects before. The slabs that have 10KB slots are quickly filled up, but now you have a whole bunch of allocated slabs that hold objects of 1MB which aren't being used (nothing else is that big). Memcache won't put your 10KB objects in a 1MB slot (even if it did, it wouldn't help for very long). It needs to get more slabs that hold 10KB objects, but it can't because all your memory has been allocated into the slabs that hold 1MB objects. The result is that you are left with potentially gigabytes of memory allocated in slabs to hold 1MB objects which sit idle while your 10KB-slot slabs are full. In this scenario, you will start evicting items out of the 10KB-slot slabs despite having gigabytes sitting idle.
This was a long-winded, contrived, and extreme example. Rarely does your caching strategy change so obviously or so dramatically. The default growth factor of slab classes is 1.25, so you'd have slabs with 1KB slots, 1.25KB slots, 1.5KB slots, etc. But the concept holds: if you are heavily using certain sized slabs and that pattern shifts (SQL queries return more objects? Web pages get bigger? A new column in a table moves a cached response up a slab class?), then you can end up with a bunch of slabs which are the "wrong" size, and you can have "nowhere" to store something despite having gigabytes of "unused" space.
If you are getting evictions, it's possible to telnet into memcache and find out what slabs are causing the evictions. Usually, a cache-reset (yeah, empty everything) fixes the issue. Here's a reference on how to get at the stats.

NewUserInternals - memcached - No Guts No Glory - Memcached - Google Project Hosting

It is important that developers using memcached understand a little bit about how it works internally. While it can be a waste to overfocus on the bits and bytes, as your experience grows, understanding the underlying bits becomes invaluable.
Understanding memory allocation, evictions, and this particular type of LRU is most of what you need to know.

How Memory Gets Allocated For Items

Memory assigned via the -m commandline argument to memcached is reserved for item data storage. The primary storage is broken up (by default) into 1 megabyte pages. Each page is then assigned into slab classes as necessary, then cut into chunks of a specific size for that slab class.
Once a page is assigned to a class, it is never moved. If your access patterns end up putting 80% of your pages in class 3, there will be less memory available for class 4. The best way to think about this is that memcached is actually many smaller individual caches. Each class has its own set of statistical counters, and its own LRU.
Classes, sizes, and chunks are shown best by starting up memcached with -vv:
$ ./memcached -vv
slab class   1: chunk size        80 perslab   13107
slab class   2: chunk size       104 perslab   10082
slab class   3: chunk size       136 perslab    7710
slab class   4: chunk size       176 perslab    5957
slab class   5: chunk size       224 perslab    4681
slab class   6: chunk size       280 perslab    3744
slab class   7: chunk size       352 perslab    2978
slab class   8: chunk size       440 perslab    2383
slab class   9: chunk size       552 perslab    1899
slab class  10: chunk size       696 perslab    1506
In slab class 1, each chunk is 80 bytes, and each page can then contain 13,107 chunks (or items). This continues all the way up to 1 megabyte.
When storing items, they are pushed into the slab class of the nearest fit. If your key + misc data + value is 50 bytes total, it will go into class 1, with an overhead loss of 30 bytes. If your data is 90 bytes total, it will go into class 2, with an overhead of 14 bytes.
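The sizing rule behind the -vv listing can be sketched in a few lines: grow each chunk size by the factor (1.25 by default) and round up to 8-byte alignment; nearest fit then decides where an item lands. This Python sketch reproduces the numbers above, but it is a simplification of the real allocator in slabs.c, not its actual code:

```python
PAGE = 1024 * 1024  # default page size: 1 MB

def slab_classes(base=80, factor=1.25, align=8, limit=10):
    """First `limit` (chunk_size, chunks_per_page) pairs, as `memcached -vv` prints."""
    out, size = [], base
    while len(out) < limit:
        out.append((size, PAGE // size))
        size = -(-int(size * factor) // align) * align  # grow, round up to alignment
    return out

def nearest_fit(item_size, classes):
    """First class whose chunks are big enough; returns (chunk_size, overhead)."""
    for chunk, _ in classes:
        if item_size <= chunk:
            return chunk, chunk - item_size
    raise ValueError("item bigger than the largest chunk")

classes = slab_classes()
print(classes[0], classes[1])     # (80, 13107) (104, 10082), matching the listing
print(nearest_fit(50, classes))   # (80, 30): the 30-byte overhead mentioned above
print(nearest_fit(90, classes))   # (104, 14)
```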
You can adjust the slab classes with -f and inspect them in various ways, but those are more advanced topics for when you need them. It's best to be aware of the basics because they can bite you.

What Other Memory Is Used

Memcached uses chunks of memory for other functions as well. There is overhead in the hash table it uses to look up your items. Each connection uses a few small buffers as well. This shouldn't add up to more than a few % extra memory over your specified -m limit, but keep in mind that it's there.

When Memory Is Reclaimed

Memory for an item is not actively reclaimed. If you store an item and it expires, it sits in the LRU cache at its position until it falls to the end and is reused.
However, if you fetch an expired item, memcached will find the item, notice that it's expired, and free its memory. This gives you the common case of normal cache churn reusing its own memory.
Items can also be evicted to make way for new items that need to be stored, or expired items are discovered and their memory reused.

How Much Memory Will an Item Use

An item will use space for the full length of its key, the internal datastructure for an item, and the length of the data.
You can discover how large an Item is by compiling memcached on your system, then running the "./sizes" utility which is built. On a 32bit system this may look like 32 bytes for items without CAS (server started with -C), and 40 bytes for items with CAS. 64bit systems will be a bit higher due to needing larger pointers. However you gain a lot more flexibility with the ability to put tons of ram into a 64bit box :)
$ ./sizes Slab Stats      56
Thread stats    176
Global stats    108
Settings        88
Item (no cas)   32
Item (cas)      40
Libevent thread 96
Connection      320
libevent thread cumulative      11472
Thread stats cumulative         11376
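Those numbers allow a rough back-of-the-envelope estimate of what one item costs. The helper below is illustrative, not memcached's actual accounting: the struct size (40 bytes, per the ./sizes output for CAS items on 32-bit) and the ~2-byte CRLF suffix are assumptions that vary by version and build:

```python
def item_footprint(key, value, item_struct=40, suffix=2):
    """Approximate bytes one item pins: key bytes + data bytes + the fixed
    item struct (40 bytes here, per `./sizes` with CAS on 32-bit; 64-bit is
    larger) + ~2 bytes for the stored trailing CRLF. Illustrative only."""
    return len(key) + len(value) + item_struct + suffix

print(item_footprint(b"user:42", b"hello"))  # 7 + 5 + 40 + 2 = 54
```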

When Are Items Evicted

Items are evicted if they have not expired (an expiration time of 0 or some time in the future), the slab class is completely out of free chunks, and there are no free pages to assign to a slab class.

How the LRU Decides What to Evict

Memory is also reclaimed when it's time to store a new item. If there are no free chunks, and no free pages in the appropriate slab class, memcached will look at the end of the LRU for an item to "reclaim". It will search the last few items in the tail for one which has already been expired, and is thus free for reuse. If it cannot find an expired item however, it will "evict" one which has not yet expired. This is then noted in several statistical counters.

libevent + Socket Scalability

Memcached uses libevent for scalable sockets, allowing it to easily handle tens of thousands of connections. Each worker thread on memcached runs its own event loop and handles its own clients. They share the cache via some centralized locks, and spread out protocol processing.
This scales very well. Some issues may be seen with extremely high loads (200,000+ operations per second), but if you hit any limits please let us know, as they're usually solvable :)