Monday, May 6, 2013

BYOD: Build Your Own Device or “ARM’d and Dangerous”




I’m on a plane now, headed to EMC World, and feeling just a bit disconnected, as the airline I’ve chosen doesn’t offer WiFi. Time to catch up on unread email and write a blog post – things that never quite make the priority list when I’m fully plugged in. I pull out my bulky laptop and immediately wish I didn’t have to lug around what has become an unfortunate boat anchor, one that precludes juggling both the Dunkin’ Donuts coffee and the Strawberry Frosted (with Sprinkles) donut at the same time.

A common complaint of any Road Warrior, certainly – but only a few weeks ago I would have marveled at how light and easy to carry my 11” MacBook Air was, and how wonderful it was not to have to carry a bulky laptop bag as well as a carry-on. This was before I discovered and implemented the most flexible and lightweight technology solution commercially available for $30 USD – my new desktop computer.

Ah, the desktop – I had forgotten what it was like to leave the house without a computer, unencumbered and fancy-free. Even vacations these days, I’m sorry to say, involve at least throwing a tablet into the bag, if not the laptop as well. But there’s mercifully no need to charge a desktop, no need to turn it off or let it hibernate (other than proper attention to our Corporate Sustainability Guidelines, of course). Constantly connected at mind-numbing 100 Mbps speeds to the corporate network, no WiFi drop-out, no RSA token entry to connect to WiFi or VPN, no hassles at all!

I still usually bring my laptop to work, even when not travelling, for those times in a conference room or at a local client site when I either need to present something or need a work interface more flexible for content creation than a smartphone. But not always. And lately, when I do bring it in, I’ve been leaving it locked up in the car instead of carrying it in just to sit unused on my desk next to my desktop.

What did this amazingly liberating technology cost, you ask? Was that $30 USD a typo? Did your employer provide both a desktop and a laptop?

My $30 desktop is a Raspberry Pi. A marvel of innovation, designed originally by folks affiliated with the University of Cambridge to be an educational computer – low cost, easy to deploy, sort of like Negroponte’s $100 laptop, except it still needs a display and a keyboard/mouse. It’s a bare circuit board with an ARM processor, not much larger than a credit card, with USB and HDMI ports – enough for networking, a wireless keyboard/mouse fob and a cable to attach my old 19” “docking station” monitor. Power comes through a micro-USB port – it runs off of my Android phone charger. It sits on my desk and asks for nothing, allowing me to do everything my old boat anchor can do, only without the tote bag.

I have found I am not alone in using the Pi for something other than education – it’s a hobbyist’s dream. Hook a 4” or 7” screen up to it, wire it into your car electrical system (or make a simple AA battery pack) and you have a car video system. XBMC, my favorite HTPC package, has already been compiled for the Raspberry Pi. Use it to drive Lego robots, control various instrument sensors – the possibilities are endless, and I read of new ideas for using the Pi daily. My dream was more mundane: I wanted to see if I could use the Pi as a full-blown work desktop.

It took a while to receive the Pi – it was backordered from Allied Electronics (http://www.alliedelec.com/lp/120626raso/) for six weeks when I ordered it – but when it arrived I was as giddy as a kid at Christmas. It was tiny, even smaller than I expected from the pictures. I downloaded the Debian Wheezy distro and imaged it onto an SD card (the hard drive of the Raspberry Pi) right away. I connected to the Raspberry Pi Store and prepared to download all of the friendly Linux packages I had used on my last laptop (pre MacBook boat-anchor) – and found out right away that this was going to be a little more involved than the current easy Ubuntu or Linux Mint distros available for Intel/AMD. LibreOffice was there, but not a whole lot more. How about VMware View, Hadoop, Syncplicity, Zimbra, all the important stuff?
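For the curious, getting the image onto the card is a one-liner on most Linux boxes. A rough sketch – the image filename is whatever release you downloaded, and /dev/sdX is a placeholder (point dd at the wrong device and it will cheerfully destroy it):

    # unzip the downloaded Wheezy image, then write it raw to the SD card
    # (double-check the device name with lsblk or dmesg first!)
    dd if=2013-02-09-wheezy-raspbian.img of=/dev/sdX bs=4M
    sync    # flush all writes before pulling the card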

It soon felt like some weird dream out of the late 90s, compiling source code from FTP sites for BSD web servers. But it was also a little bit nostalgic and fun.

Hadoop was the easiest – thanks to Tom’s excellent guide to installing it at https://fullshovel.wordpress.com/2012/07/ and the fact that, since it’s Java-based, it didn’t need to be ported to ARM. The JobTracker process, however, is a bit too heavy a load for the Pi to serve as both a desktop and a single-node cluster at the same time – I use one or the other (this may be a clear call for more Pi).
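For anyone who wants to try it, the broad strokes look something like the sketch below – version numbers and paths are illustrative, and Tom’s guide has the real details:

    # Hadoop 1.x era, single-node "pseudo-distributed" mode on the Pi
    wget http://archive.apache.org/dist/hadoop/core/hadoop-1.0.3/hadoop-1.0.3.tar.gz
    tar xzf hadoop-1.0.3.tar.gz && cd hadoop-1.0.3
    export JAVA_HOME=/usr/lib/jvm/java-6-openjdk-armhf   # wherever your JVM lives
    # point fs.default.name and mapred.job.tracker at localhost in conf/*.xml, then:
    bin/hadoop namenode -format
    bin/start-all.sh   # NameNode, DataNode, JobTracker and TaskTracker, all on one Pi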

VMware View was a bit tougher – for some reason, while I can use a View client from my Android phone, tablet, iPad or pretty much anything else, VMware has not yet released a Raspberry Pi client. Thank goodness for open source – no VDI access would have put a serious crimp in my desktop dreams. I was able to compile the VMware View Open Client, but couldn’t get it to work on the Pi – at least not until I found an old bug report on a known issue (https://code.google.com/p/vmware-view-open-client/issues/detail?id=103). Patched a few headers, and I was good to go!
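For reference, the build itself was the standard open-source drill – this is from memory and the exact steps and flags may differ, but roughly:

    # grab the source from the Google Code project, apply the header
    # fix from issue 103, then the usual configure/make dance
    ./configure
    make
    sudo make install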

Installed Chromium with Gash and VLC with minor hacking, then Pidgin and the Pidgin plugin that supports Microsoft Office Communicator. Pidgin, unfortunately, wouldn’t connect – I found another bug, this one related to certificate handling, and got it working with a simple startup script: “NSS_SSL_CBC_RANDOM_IV=0 pidgin”. Now my Raspberry Pi broadcasts my presence info as “Eating a Fried Cheeseburger” 24x7.
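If you hit the same certificate bug, the fix is just to make sure that NSS variable is set whenever Pidgin launches – e.g. with a trivial wrapper script (illustrative):

    #!/bin/sh
    # work around the NSS certificate-handling bug by disabling
    # the CBC random-IV behavior before Pidgin starts
    NSS_SSL_CBC_RANDOM_IV=0 exec pidgin "$@"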

I can see a whole new wave of Build Your Own Device taking over corporate IT. With a machine small enough to fit in a file cabinet or drawer, needing only a screen of some kind and a keyboard, who needs more technology than a simple ARM processor with less horsepower than a phone?








Monday, January 14, 2013

The Tide of STEM

The American Association for the Advancement of Science's Annual Meeting is coming up soon, and it prompts me to speak to an ongoing challenge my peers and I face, as well as our common passion for what we see as the path to resolution. The challenge is the lack of qualified workers for technical positions, one of the few areas of the economy with significant growth. There are multiple paths to address this challenge, including good old-fashioned employer training (which I believe every firm can always improve on), technical training for US Veterans, updating our immigration laws, or simply moving work to where there are available resources. Still, most of the industry would agree that an increased focus on STEM (Science, Technology, Engineering and Mathematics) education is a critical element in our future success.

At the last Data Science Summit, the demand for Data Scientists – and the shortage of those who fit Steven Hillion's description of "equal parts engineer, statistician and investigative journalist / forensic reporter" – was a constant part of the discussion. James Kobielus speaks of how the term "Data Scientist" itself has a long lineage, and ultimately the skills of a Data Scientist are built on the fundamental principles of the Scientific Method. Tomorrow's Data Scientists are the STEM students of today, those learning the Scientific Method as well as foundational mathematical skills, and starting to develop domain expertise in key scientific areas.


Industry, education and legislative leaders are working today to drive greater uptake of STEM education among today's youth. Thought leaders like EMC's Howard Elias are advocating for leveraging technology to "flip the classroom" and transform STEM education. EMC's Academic Alliance has already started to tackle the challenge, including developing a Data Science curriculum, and has reached over 150,000 students across the 1,000+ universities in over 60 countries that form the Academic Alliance.

Multiple organizations, including NACME and the STEM Education Coalition, are working to help prioritize and drive STEM education initiatives. Still, no one would argue that success has been fully attained, and most feel that much more effort and focus is needed.


My own experience with STEM education was shaped by excellent educators. Initiatives to increase the number of STEM teachers, such as the AFCEA Educational Foundation's scholarship program, are targeted at providing this experience to more children. I know that I would have been much less likely to enter a technical field, or even to take the same delight in technical curiosity, were it not for many excellent STEM teachers growing up. To this day, I still note with enjoyment the occurrence of a fairly non-traditional holiday: the birthday of Robert Bunsen at the end of March. In high school, this birthday was an occasion for our chemistry teacher to lead a field trip to his farm, where we tapped trees for maple syrup. Sugar being a powerful incentive, we barely noticed that we were learning about evaporation, filtration, reverse osmosis and other things that would never have seemed quite as compelling coming out of a textbook. I truly hope that we can find some way to make STEM education similarly sweetly successful in the future.


Tuesday, October 16, 2012

Networking from A to PI


Often with technology it is difficult to quickly identify the root cause of a problem. Complex systems present multiple possible points of failure, especially when they have no single point of failure - a strong argument for the KISS principle.

In enterprise IT environments, this challenge is compounded by the complexity of the IT organization. Multiple silos of expertise and responsibility lend themselves to a problem triage approach whereby each constituent group in the IT organization separately goes through an internal assessment of a technical issue. Frequently this results in all groups saying "it's not me" when asked to identify the source of a problem - and no immediate isolation of root cause or consensus definition of a likely solution.

I like to describe this challenge with an old joke:
There's a problem with the performance of a mission critical application. A huge number of resources assemble - server and storage administrators, application developers, DBAs, QA and Performance Test teams, business analysts, virtualization administrators, line of business executives, program and project managers. An auditorium is used as the "war room", and this huge group rapidly identifies the likely cause of the issue. It's the networking team - they weren't in the room.

Whatever humor that joke has stems from two things – human nature, and an unspoken IT cultural divide that places networking outside of other technology disciplines. Networking has turned into an essential, "always on" service with different characteristics from other IT disciplines and service offerings. It is a unique culture with strong barriers to entry. As implied in the phrase "ping, power and pipe," it's the air we breathe, it's the dial tone on our phone...

Dial tone is not as good an analogy as it used to be, and old conceptions about networking are being challenged as well. Software Defined Networking (SDN) is headed rapidly towards the peak of its hype cycle (or is well past it, depending on who you talk to). It promises to deliver the same agility and the same CapEx and OpEx reductions that companies have attained on their servers by leveraging VMware (the Band-Aid of hypervisors – the brand name that became the category).



Also like VMware, SDN allows for abstraction of the underlying hardware components and supports automation and orchestration via APIs. The benefits are clear: agile, nondisruptive configuration of network resources, with the ability to provision in seconds rather than days or weeks. No more waiting for someone to assign an IP address from a spreadsheet-managed "pool". A new ability to tie network management and monitoring together with application and other infrastructure components, presenting a "single pane of glass" for all, and immediate, automated root cause identification should problems arise. An end to finger pointing – Nirvana!
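To make that concrete: instead of filing a ticket and waiting, an orchestration layer asks the controller for a network with a single API call, something along these lines. The endpoint and payload here are hypothetical, purely to illustrate the model – every SDN controller exposes its own flavor of API:

    # hypothetical controller endpoint and JSON schema, for illustration only
    curl -X POST http://sdn-controller.example.com/api/v1/networks \
         -H "Content-Type: application/json" \
         -d '{"name": "app-tier-42", "subnet": "10.42.0.0/24", "qos": "gold"}'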

While the Promised Land has not yet arrived for most enterprises, the allure has been driving significant interest, media coverage and market transformation. VMware's acquisition of Nicira in July 2012 for 1.26B USD is one example of this. Multiple standards still vie for acceptance and the traditional networking vendors are placing their bets.

The end state goals, if not the winning path to those goals, are clear - simplify, automate and orchestrate networking. Move to policy-based management. Find a way to abstract networking infrastructure in the same way that VMware abstracts server infrastructure - as well as storage infrastructure via VAAI and VASA APIs. In doing these things, cut costs and eliminate provisioning delays. Make it easier to dynamically move workloads across geographies and into and out of the cloud. Make networking just another pool of resources that can be managed and monitored with compute and storage through a single dashboard.

Does all of this sound as attainable as world peace? Perhaps it would if we weren't in the midst of rapid cloud computing adoption, or hadn't already seen this happen with server virtualization. The technology, while new to market on the whole, is already commercially available and rapidly expanding in capabilities.
It's coming.
At some point, perhaps not too far off, a gathering of all critical IT resources to identify the root cause of an issue may become a SWAT team of one cloud administrator.