Sep 16, 2017
 

During the heyday of Linux, Neal Stephenson wrote an essay about operating systems called "In the Beginning... Was the Command Line". The point was not so much the command line itself, and he eventually said he no longer stands by the essay, after switching to OS X. Still, after many years of using Unix, I resonate with it. Even though I use Mac OS X, I find myself on the command line often, for almost all my needs.

These days the command line is becoming fashionable again. Thanks to the cloud, and thanks to the cloud running mostly on Linux, we need command line tools. Most automation and orchestration does not need a user interface; if anything, such tools are doing away with user interfaces. Even when they do need one, it is easy to throw a UI together with REST APIs and HTTP.

First, a little bit of motivation for those who grew up using Windows. Yes, windows make it easy to use a system, with help and hints alongside every invocation. However, as you pull together different systems and fill in the gaps between them, you will find that the command line makes you more productive. You are not limited by the tools vendors offer you; you are only limited by your imagination and ability.

Particularly if you are working in the cloud, you find yourself using the command line often. You are logged into a remote machine, and you need to download or install a new app. You need to edit files, transmit files, and change parameters. You need to do all this, possibly even from your cell phone. It can get very frustrating to look for applications to suit your workflows. And the network settings may not even let you run arbitrary programs – often they allow only ssh.

If you are going to develop or test applications on the cloud, you have to learn how to use the command line effectively. This article is a distillation of the tools that I found useful in working on remote machines. As usual, the following caveats apply: this is not meant as a guide, but a curated list of tools that I found useful. I observed myself at work for a few days and noted down all the tools I used – naturally, your criteria might vary. I will only explain what each tool is and how to use it, and leave the details to a link. Hopefully it helps you become productive.

Logging in: Client

Most Linux systems on the cloud allow logging in only via ssh. If you are on an OS X or Linux machine, you will not need to look for a client. On OS X, I recommend iTerm2 instead of the usual Terminal application. On Windows, you have two choices: PuTTY and MobaXterm (home edition). You can use other systems such as Cygwin, but they are too heavy for our needs. All we need is a window into the remote machine.


I prefer MobaXterm for the following reasons:

  1. Integrated sftp: Copying files between your desktop and the server is easy.
  2. Credential management: It remembers passwords, if you allow it, for easy login. A better method is to use your PKI infrastructure.
  3. Support for X: If you like, you can even tunnel X Window programs. Not that I recommend it – you should be able to live with just the command line for your needs.
  4. Additional tools: Even if you are using it as a local terminal, it has enough goodies, almost like a lightweight Cygwin.

Here is the way I work: I start MobaXterm and ssh into the box of my choice. If I need to cut and paste a small amount of text, I use the Windows cut-and-paste mechanism. If I need to send a large amount of text or data, or downloadable files, I use the built-in sftp mechanism to move the files back and forth.

Logging in: ssh

While MobaXterm can remember the password and enter it on your behalf, that is not enough. Ssh has become the de facto mechanism for a lot of communication. For instance, if you are using git, you use ssh to authenticate. If you are using scp, it uses ssh underneath. If you want to automate such commands, you cannot expose the password to the scripts. Instead, you use password-less ssh, with a public key/private key pair. Also, you may have to log in to multiple servers from the cloud gateway machine you first logged into.

All you need to do is:

  1. Generate ssh keys using ssh-keygen. You can do it anywhere and transfer the files. I usually work on many machines, so I have my own generated public and private keys that I take from machine to machine.
  2. Append the contents of id_rsa.pub to authorized_keys in the .ssh folder on the remote machine.
  3. When you log in the first time, you need to accept the host key. After that, you can log in without the password.

Debug hints: Try ssh -v or ssh -vvv to get an understanding of what is going on. If you run into trouble, most likely it is because of permission issues. Make sure the id_rsa file is readable only by you, and that id_rsa.pub is readable by others.
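A minimal sketch of the whole flow; the host name server.example.com and account user are placeholders:

# Generate a key pair (accept the defaults, or add a passphrase)
ssh-keygen -t rsa -b 4096

# Copy the public key over; ssh-copy-id appends it to ~/.ssh/authorized_keys
ssh-copy-id user@server.example.com

# Or do it by hand
cat ~/.ssh/id_rsa.pub | ssh user@server.example.com 'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

# Fix permissions if ssh still asks for a password
chmod 700 ~/.ssh
chmod 600 ~/.ssh/id_rsa ~/.ssh/authorized_keys
chmod 644 ~/.ssh/id_rsa.pub

# Verbose output to debug what ssh is doing
ssh -vvv user@server.example.com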


Additional hint: If you are inactive, the server logs you out after a period of time. If you like, you can configure ssh to send a packet periodically to keep the connection alive. For that, add the following lines to .ssh/config:

Host *
	ServerAliveInterval 30

Interactivity: Tmux

There are several challenges in working remotely over ssh in a single window. What if you want to temporarily run a command as root? You need to open another window. What if you want to go home and keep working on the remote machine? Your connection is lost and you have to log in again from home – and all your context is gone.

Tmux (and an older alternative, screen) helps with these kinds of problems. There are nice user manuals, guides, and cheat sheets for you to learn from. I will tell you how I use it (a short command sketch follows this list):

  1. As soon as I log in, I start a tmux session. If I already have one, I attach to that session.
  2. In a session, I have at least four shell windows running.
    1. One is for working on the command line as a regular user.
    2. Two runs as root – useful for installations or killing processes.
    3. Three is for watching essential processes.
    4. Four is my editor, for editing files locally.
  3. I use C-b 0/1/2/3 to go to the window of my choice, work on it, and come back.
  4. When I am done, I detach from the session.
  5. When I use a different machine, I just attach to the same session.
  6. Sometimes I run it from a shared account – then two of us can log in to the same session and debug whatever we run together.
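A minimal sketch of that workflow; the session name "work" is just an example:

# Start a named session (the first time)
tmux new -s work

# List sessions, and attach to one, from any machine you ssh in from
tmux ls
tmux attach -t work

# Inside tmux: C-b c opens a new window, C-b 0/1/2/3 switches windows,
# C-b d detaches and leaves everything running on the server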


Just a shout out for watch: suppose you plan to run a command in a loop and watch its output. Watch runs it in the terminal and repaints only the changed characters, which clearly shows what part of the output changed and what did not. For instance, "watch -n 1 ls -l" runs the command "ls -l" every second; you will clearly see if a new file appears. It is most useful for checking status, or the load of the system. Very useful for checking the load on my GPU machine, using "watch -n 1 nvidia-smi".

Downloading: wget, curl, elinks

The first thing you will often need is to install a package. These programs help download any software or application that you want. Out of these, I prefer wget.

A lot of the time, you may have to log in or accept a EULA before downloading the software. In that case, you have two choices. Download the software onto your own machine and use scp to copy the file over – though your network may not be good enough for that. Or, use a text-based browser like elinks to download the software directly. Of course, modern JS-heavy websites do not work well with elinks; in that case, if you use MobaXterm, you may be able to run an X Window application such as Firefox and have it pop up on your desktop.
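A few typical invocations, with a placeholder URL:

# Download a file, resuming if the connection drops
wget -c https://example.com/some-package.tar.gz

# Same with curl; -L follows redirects, -O keeps the remote file name
curl -L -O https://example.com/some-package.tar.gz

# Download and unpack in one shot
curl -fsSL https://example.com/some-package.tar.gz | tar xz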

Rsync: Swiss army knife for moving files between machines

Let me give a scenario. Suppose you are editing files on your local machine and want to move them to a remote machine. How? You can use scp or sftp. Now suppose you changed only a few of the files in a folder. How do you move just those? It is tedious to remember which paths changed and copy them to the right place. And if you changed one line in a large file, you still have to copy the whole file.

The answer to this and many other questions is rsync. It works over ssh, which means that if you can log in, you can move the files. And it moves only the differences.

I have a system to publish a website. I edit the files locally, and a shell script rsyncs them to the remote location, where they show up on the website. For instance, consider the following one-liner:

rsync -avz -e ssh --delete ./local/ root@machine.com:$remote

It uses ssh (the -e option), -z (compression), -v (verbose), and -a (archive mode). The "-a" does the trick: it syncs recursively, copies symlinks, and preserves owner, modification times, and permissions. In other words, when it copies the folders, it does the right thing. The --delete option removes extraneous files from the remote location – that is, if you delete a file in the local directory, it is removed from the remote too; without it, rsync only updates files. The trailing slash on the local folder means copy only the contents; without the trailing slash, it copies the folder itself, including the contents. While it looks magical, it only touches the files that were modified, transmits only the deltas, and compresses them as well.

Example use case: What if I want to update the remote folder every time I write to the local folder?

What we need is to monitor a file or folder for change. There are several ways we can do it.

  • You can use inotify tools for any change monitoring. The link provided gives you an idea on how to use it for continuous backups.
  • My favorite is to use node.js – there is a module called chokidar that does what we need and more. You can invoke any process on any change.
  • Python has a module called watchdog, which does something similar.

All we need to do is use any of these programs to watch for changes in a loop. When a change occurs, do the remote push and then go back to waiting. A minimal sketch follows; the rest is left as an exercise.
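For instance, here is a bash sketch using inotifywait (from the inotify-tools package); the paths and host are the placeholders from the earlier one-liner:

#!/bin/bash
# Re-run rsync whenever anything under ./local changes
while inotifywait -r -e modify,create,delete,move ./local; do
    rsync -avz -e ssh --delete ./local/ root@machine.com:$remote
done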

Find: Swiss army knife to work on many files

For the life of me, I can never remember the syntax of find. Still, I find myself using this utility all the time. It is a useful tool for dealing with lots of files in one go. Let me illustrate with some examples:

# To print all the files
find .

# To print all the files with .js extension
find . | grep '\.js$'

# To print only the directory files
find . -type d

# To print only the non-directory files
find . -type f

# To print all the files with a specific string, say "TODO"
find . -type f | xargs grep "TODO"

# Notice that xargs runs the command on all the file names it receives.

# To print all the files that changed in the last 2 days
find . -mtime -2

# You can combine all the capabilities in many ways
# For example, all the java files touched in the last week,
# with a case insensitive TODO
find . -type f -name '*.java' -mtime -7 | xargs grep -i TODO

Admittedly, there are better ways to use 'find', but I do not want to remember its idiosyncratic options – these are the most useful, and the most common in my experience.

In addition to find, there are other Unix utilities that I find myself using a lot. If you know these, you can use the command line effectively, reaching for higher-level scripting languages only when really needed. Simple bash, with these utilities, can do all that you want and more. Moreover, they are available in any standard Linux installation. For instance, using cut, paste, and join, you can selectively choose and merge parts of files. You can sort files on numeric fields. You can slice and dice files with them.
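A few hedged examples; /etc/passwd is a standard file, and names.txt and scores.txt are hypothetical:

# Pick the first and third columns of a colon-separated file
cut -d: -f1,3 /etc/passwd

# Sort numerically on the third column
sort -t: -k3 -n /etc/passwd

# Paste two files side by side, line by line
paste names.txt scores.txt

# Join two files (sorted on their first column) on that column
join names.txt scores.txt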

Running a website: Python, live-server

You have a static website that you want to show to a colleague. Or, you have a set of files that you want to serve up for an application or for colleagues. You don't have to go all the way and install and configure Apache. Instead, the following one-liner can do it:

# python2

python -m SimpleHTTPServer

# python3

python3 -m http.server

# Standard port is 8000. 
# You can change it by specifying the port as an argument.

If you do not have Python installed, you can try node.js based HTTP servers. The two servers that I like are http-server and live-server. The beauty of live-server is that the browser automatically reloads when a file it is showing changes.

# To install and use http-server
# Default port is 8080
npm install -g http-server
http-server

# To install and use live-server
npm install -g live-server
live-server


Live-server is particularly useful when developing static websites. As we edit and push to the remote site, the browser automatically gets updated.

Running a website from behind a firewall

Sometimes, your machine may not be accessible from outside. It may be behind a firewall – you may have had to go through a series of ssh tunnels to get to that machine. Or, your machine may be in the corporate VPN.

Fear not. As long as you can access a public website, you can show your website to the outside world. One option is http://localtunnel.me – you need to have node.js installed on your machine.

# Install localtunnel
npm install -g localtunnel

# Start the webserver
live-server --port=8000

# Run the command to tunnel through
lt --port 8000

# The system will give a URL for you to access it from outside.
# Example: https://nuwjsywevx.localtunnel.me/
# Notice that it is a publicly resolvable URL
# And, it comes with https!

You have other options: you can use http://ngrok.com (there is a free plan), or you can tinker with ssh to do reverse tunneling. I am not going to give the full details here, but if you use "shell in a box", you can even access this machine from outside the firewall, via a secure shell in the browser. Naturally, please comply with corporate security guidelines.
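If you want to try the ssh reverse tunnel yourself, here is a minimal sketch, assuming you have an account on a publicly reachable host (public-host.example.com is a placeholder):

# Expose local port 8000 as port 8080 on the public host
ssh -N -R 8080:localhost:8000 user@public-host.example.com

# Anyone who can reach the public host can now browse
# http://public-host.example.com:8080
# (for access from other machines, the sshd there needs GatewayPorts enabled)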

Creating and publishing content: markdown tools

For programmers, creating documentation is important. They need to record what they do. They need to write design documentation. They need to write user manuals.

Writing in Word or any other WYSIWYG tool is pointless – it does not work well over the command line. Suppose you are on a remote machine and you need to refer to an installation manual. If it is in .docx format, how are you supposed to read it from the command line?

Of course, you could write your documentation in plain text files. Text is ubiquitous; you can create and read it with many tools. Yet you will not get good looking, useful documentation out of it. You cannot print it nicely. You cannot have images. You cannot use hyperlinks. All in all, the experience is limited to the command line; it cannot be repurposed.

A middle ground is Markdown. It is such a simple format (built on plain text) that it is easily readable, even if you do not convert it at all. It can be created with any tool – any text editor will do; I use Atom or Visual Studio Code, but even Notepad works.

The rules are simple: see https://github.com/adam-p/markdown-here/wiki/Markdown-Cheatsheet – you can learn them in 10 minutes. If you use Atom or Code, you can even see a preview as you edit.

Once you author the document in Markdown, you can convert it to any format you wish. There are several tools for this; my tool of choice is pandoc. It can convert Markdown into many formats, including docx (for Word), plain text, and HTML.
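A couple of typical conversions (the file names are placeholders):

# Markdown to Word
pandoc notes.md -o notes.docx

# Markdown to standalone HTML
pandoc -s notes.md -o notes.html

# Markdown to plain text
pandoc notes.md -t plain -o notes.txt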


You can do even more with Markdown. I use Markdown with minimal YAML settings to generate websites – I use mkdocs for its simplicity. If you have a bunch of Markdown files, you can generate a static website, with themes and search capability, using mkdocs, and host it using the techniques we discussed earlier. It is always handy for a developer to create a simple website on the library or design that they are working on.
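Getting started is a short sequence of commands; the project name is a placeholder:

# Install and scaffold a new site
pip install mkdocs
mkdocs new my-notes
cd my-notes

# Preview locally (serves on http://127.0.0.1:8000 by default)
mkdocs serve

# Generate the static site into the site/ folder
mkdocs build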

Working together: Expect, tmux

Suppose you are on a remote machine and you are experiencing some trouble – some command line application is not working. You want to show it to an experienced colleague. Or, you just want to train a junior colleague. Here are two ways:

  1. Suppose you are sharing the account (it is common with testers and the like to create a common account and use it). In that case, you start a tmux session like we described earlier and let your colleague attach to the same session. Now each of you can easily see what the other is doing. Of course, you want to keep the terminal sizes the same; otherwise it will look weird.
  2. If you have a different account, you can still share your session with others, using a program called "kibitz", which is part of the "expect" system. Assume you want help from user <abc>. Both of you should be logged in at the same time. All you need to do is run kibitz <abc>, and then <abc> needs to type the command that shows up on their terminal. From then on, they can see and type on your terminal along with you.

Epilogue

I hope you find these commands useful. Using this knowledge, you can edit files locally, push them to a remote machine, and share them on the web with a client. You can start working at the office and continue at home on the same setup. You can keep machines in sync. You can create developer-friendly websites and share them with colleagues from behind firewalls.

Finally, there is a saying: in Windows, you are limited by the tools you have; in Unix, you are limited by the knowledge you have. All the tools are there for you to pick up and combine in many different ways. The more time you spend, the more proficient you become. Go forth and learn!

Dec 31, 2014
 

Because of Facebook, I have been in constant touch with friends, acquaintances and even people that I did not meet. I am using this annual letter as a way of summarizing, introspecting, and filling in the gaps in my usual communications about technologies to friends. It is heavily slanted towards technology, not the usual intersection of business and technology.

There are three ways that I learn about technologies. One is by experimenting on my own: by actually coding, practicing, verifying hunches, validating ideas, and playing, I learn a bit. By talking to customers, sales people, engineering managers, and developers, I understand what the problems of applying technologies are. By reading books, newspapers, and blogs, I try to see the interrelationships between different subjects and the technology influences in the modern world. Let me review, from an annual perspective, what these three different influences taught me.

(Cloud) Container based technologies

I played quite a bit with container based technologies. Specifically, I tried various Docker based tools. In fact, I set up a system on Digital Ocean that lets me create a new website, make modifications, and push it to the public, in less than 10 minutes. That is, using the scripts that I cobbled together (most of them are public domain, but I had to make some tweaks to suit my workflow), I can create a new Docker instance, initialize it with the right stack, provision a reverse proxy, push the content to that machine, install all the dependencies, and start running the system.
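My scripts are specific to my setup, but the core of such a flow is only a few commands. Here is a minimal sketch, with nginx standing in for the stack and placeholder host and path names:

# On the Digital Ocean box: serve static content from a host folder
docker run -d --name mysite -p 8080:80 \
    -v /srv/mysite:/usr/share/nginx/html:ro nginx

# From the editing machine: push updated content to that folder
rsync -avz -e ssh --delete ./site/ root@droplet.example.com:/srv/mysite/

# Check that the container is up
docker ps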


Minimal, cloud-ready OS’s

From my experiments with Docker, I see that the power of a virtual machine that can run any OS is not that useful. I don't want to think in terms of an OS; what I want is a stack to program against. My OS should be a cloud-aware OS. It should be minimal, should work well in a hive, should support orchestration, and should be invisible. Since I am unlikely to work in it interactively, I place a premium on programmability and support for REST services. Of course, it needs to be secure, but since it is a minimal OS, I want security at different layers.


Based on all these requirements, I see hope for CoreOS kind of OS. I don’t know if coreos is it, but something like that — a minimal, cloud ready OS is going to succeed. Companies like Google and Facebook already use such systems for their internet scale applications.

(Server side technologies) Node.js technologies

I entered this year having learnt a bit about node.js. I have a love-hate relationship (who doesn’t?) with JavaScript. On one hand, I love its ability to treat functions as first class objects, its polymorphism, its ability to extend objects etc. Its approach to type system is dynamic, flexible, and incredibly powerful.

Yet, I find a lot of things to hate about JS. Its scoping is infuriating. Its support for the basics of large scale programming is absent. Its ability to type check is minimal. It leaves out crucial features of programming, letting users create competing idioms.

For small to medium systems that are cobbled together by REST services, node.js still is a quick way of getting things done. And, I like npm — it is even better than CPAN. I am not a fan of reinventing the wheel with all the tools like gulp, bower etc. The proliferation of these tools is confusing, putting off a casual user with their own standard tools. (Java was the original culprit in these matters.)


In Node.js, the technologies I played with are:

  • Express: Of course, duh. There may be better ones there, but I needed one standard one in my arsenal. This one will do.
  • Mongo: The power of using JavaScript across all the layers is seductive. I don’t have to translate from one format to another. And, let somebody worry about performance (well, not actually, but will do for the most part).
  • Usual set of libraries, involving, parsing, slicing and dicing, and template management.

Front end JavaScript

I have been frustrated with front end technologies. There are too many frameworks. MVC? MVVM? The complex edifice makes my head swim, and in the end I am not able to make sense of it all. Thankfully, I see things changing. I have been able to settle on a few choices for myself – not because they are the best (though in some cases they are), but because they are good for a generalist like me.

jQuery: I still find it the best way to manage the DOM from JS. I never bothered to learn the full DOM API, and I find jQuery a convenient way of managing it.

Web components: Specifically, I fell in love with Polymer. Why? Think about this. HTML has a standard set of tags. Suppose you want to introduce a new tag. You need to create JavaScript that parses the new tag and manages it for you. So, your code is strewn in a few places: the JavaScript code, CSS specifications, and the HTML. It is too messy, too unmaintainable, and more importantly, difficult to mix different styles.

Enter web components. You can create new elements as first class entries in DOM. The looks of the new element are specified via CSS in there itself. The behavior through the JavaScript also goes there. You expose specific properties and interactivity. The big thing is since it is first class element in DOM (as opposed to translated to standard elements through JS), you are able to reference it from JQuery and manage it just like you would a heading.


Since not many browsers implement web components yet, we need a polyfill, a way of mimicking the behavior. Thanks to Polymer, we now have JavaScript code that makes it appear that the browser supports the DOM behavior of web components. This polyfill intercepts every call to the DOM and translates it appropriately.

Summary: It is slow and buggy at the moment. In time, it will take off, creating a nice third-party market for web components. It is almost like what COM did for Microsoft.

Assortment of libs: While I did not reach my ideal setup (where the machine can speak IKWYM – "I know what you mean" – language), there are several libs that help me now. Specifically, I like templates with inheritance, like nunjucks. I also use Lodash to make life simpler, and async.js to tame the async paradigm of JavaScript.

HTML looks and feel

As an early adopter of Bootstrap, I stand vindicated in pushing it for my corporate users. Nowadays, almost all development we do is responsive, built on standard template driven development. Within that, I dabbled with a few design trends because I found them interesting:

  • Parallax Effect: You see pages where the background images roll slower than the text? It gives a 3D layering effect. It is particularly effective in creating narrative stories. When the images are also responsive, this 3D effect can make the web pages come alive. To take a look at some examples, see: http://www.creativebloq.com/web-design/parallax-scrolling-1131762

  • Interactive HTML pages: Imagine you are telling a story where, by changing some choices, the story changes. For example, tabs change the content. But imagine creating new content based on the user input, not merely showing and hiding content. For instance, once we know the name of the reader, their age, and other details, it is easy to change the text to incorporate those elements. Or, if we know what they need, we can address it directly in a sales dialog. While I did not carry this idea to completion, I satisfied myself with the technology and the framework to do this kind of interactive narrative sales tool. Naturally, this framework has a little bit of JS magic.

  • Auto generation of web pages: As I was writing text, converting the text to HTML and a suitable web page became an obsession with me. I finally settled on using a combination of markdown, Bootstrap, and YAML properties to generate a good looking web page to share with people.

If you are interested, please see these two blog posts, from yesteryears:

  1. http://www.kanneganti.com/technical/engineers-guide-to-understanding-web-design/
  2. http://www.kanneganti.com/technical/html5-design-knowledge-for-application-developers/

Static web apps

As I started seeing the advances in web front ends, I see the possibilities of static websites. For instance, it is easy to create and run a static e-commerce application with 10K or so SKU's without any trouble. We can even have a recommendation engine, a shopping cart, and various payment methods – all thanks to web services and HTML5.


The following are the technologies that I found useful in the static websites.

  • markdown for html generation: For generic content, markdown is easy to author format. In fact, we can even author books in this format.
  • asciidoc for book authoring: For larger format HTML with more whizbangs, asciidoc is what I tried.
  • docpad for static website generation
  • pandoc for format conversions

For the in-page manipulation of large number of objects, I find the following very useful:

  • pourover: The amazing slice-and-dice web pages for displaying large tables at the NY Times are built on their PourOver library. I have high hopes for it. I think there are a lot of innovative uses for this library, given its performance and its ability to cache results.

One of my favorite problems is to develop a web application without any DB in the backend – a pure static application that acts as a library interface. We can search, slice, and dice the collection using various criteria: for instance, the latest document about a particular technology, written by author XXX, related to document YYY.

Mobile & Hybrid application development

I have for a long while, bet on hybrid application development, instead of native application. Now, I concede that on the higher end market, native apps have an edge that is unlikely to be equaled by hybrid applications. Still, in the hands of an average developer, hybrid platforms may be better. They are difficult to get wrong, for simple applications.


This year, I was hoping to do native application development, but never got around to it. With Polymer not yet completely ready, I dabbled a little with the Angular based framework called Ionic. It was OK for the most part.

Still, for simple pattern based development, I hold a lot of hope for Yeoman. In corporate contexts, one can develop various scaffoldings and tools in the Yeoman generator framework. That leads to compliant applications that share a standard look and feel without expensive coding.

Languages

In my mind, right now, there are three kinds of languages: ones that run on the JVM – that includes Scala, Java, etc.; ones that translate to JavaScript – these include TypeScript, CoffeeScript, etc.; and the rest, like Go. Innovation in other languages has slowed down.


Despite that, the three languages I dabbled in this year are: Scala for big data applications, specifically for Spark; Python, again for data applications, specifically statistical processing; and JavaScript, as I mentioned earlier. I liked TypeScript, especially since it has support from Visual Studio. I started on R, but did not proceed much further. Another language I played with a bit is Go, in the context of containers and deployments.

Data and databases

This year, I gave a 3 hr lecture in Singapore on big data, in the context of the internet of things. I should have written it up in a document. The focus of that talk was what the different areas of interest in big data are, and what technologies, companies, and startups are playing in those areas.


This holiday, I experimented with Aerospike, a distributed KV database developed by my friend Srini's company. Whatever little I tried, I loved. It is easy to use, simple to install, and fast to boot. According to their numbers, it costs only $10 per hour to do 1 million reads per second on the Google compute platform. I will replicate that and see how it compares against other databases like Redis and Cassandra that I am familiar with.

On the statistics front, I familiarized myself with the basics of statistics, which is always handy. I focused on http://www.greenteapress.com/thinkbayes/thinkbayes.pdf to learn more. I followed Quora to learn about the subjects in Coursera. I wanted to get to machine learning, but that will have to wait for 2015.

One particular aspect of big data and analytics that fascinates me is visualization. I played with D3 – it is, of course, the basis of most of the visualization advances that we see these days (http://bost.ocks.org/mike/). I am on the lookout for other toolkits like Rickshaw. I will keep following this space to see the new advances that make it more mainstream.

Other computing tools

I find myself using the following tools:

  1. Atom for editing text files.
  2. Brackets.io for editing html files, and for live preview of the static web pages.
  3. VMWare workstation for my virtual machine needs.
  4. Markdown for note taking.
  5. Developer tools from Chrome for web development.
  6. Visual studio for web development
  7. Digital Ocean as my playground of coding activities
  8. OVH for any large size applications and servers

Customer conversations

Since most of these conversations have some proprietary content, I cannot give full details here. In general, the focus is on innovation and how corporations innovate in the context of established businesses. Typically, it is a mix of technology, processes, and organizational structure transformations to change the way businesses are run. I will need to talk about it in byte size postings some other time.

Wish you a happy 2015! May your year be full of exciting discovery! See you in 2015!

Mar 27, 2014
 

You are the CIO. Or, the director of application development. You hear about consumerization of IT in different contexts. You hear about mobile applications developed by teenage kids in only months, and these apps are used by millions of adoring people. And, your business people are demanding to know why you can't be more like those kids.

What should you do? Retrain your staff on some of the consumer technologies? Get a partner who has the skills in the consumer technologies? Move existing applications to consumer-friendly devices? Are they enough? And, why are you doing all these anyway?

In the last couple of years, I have been working with different IT leaders to evolve a definition of and an approach to this problem. By training, I am a technology person – the kind that develops consumer technology. By vocation, I help IT departments adapt technology to meet their strategy. Being on both sides of the fence, I have a perspective that may be interesting.

This note is derived from a few presentations I made at different industry conferences. The self-explanatory slide deck is available at: http://www.slideshare.net/kramarao/consumerization-32279273

Let’s look at the following three questions, in order:

  1. What is consumerization of IT?
  2. How does it affect IT departments?
  3. What should the IT departments do about it?

Consumerization of IT: A bit of history

As coined in 2001, consumerization refers to the trend of building applications that are people centric. Have we not been doing that always? Yes and no. While we were developing the applications for people, our main focus was somewhere else. The focus was either the growth of the business (by managing the volume of data), automation of activities to speed up the processes, automating entire business value chains, or, only recently, focusing on the customers.

When we were building applications earlier, we were building them for a purpose: to solve a business problem. People were another piece of the puzzle – they were meant to be a part of the solution, but not the purpose of the solution.

Enterprise IT application development

How are these applications developed? Take a look at the sample development process.


In the current traditional situation, the EA people map the needs of an enterprise to a technical gap, see if there is a packaged app, and either customize one or build a new one. The biggest questions often boil down to “Build vs. Buy” or, what to buy.

A few things that you will observe are these:

  • Applications take a long time to develop: Typically they are large, and serve the long term needs of the enterprise. Any other kind of application is difficult to retrofit into this model of development. For example, if the marketing department wants an application for a one-time event, existing IT processes make it difficult to offer that service. That is why we find marketing is one of the prime movers behind consumerization.
  • Applications serve the common denominator: They address the most common needs of the people. If your needs are very different from others', they will be ignored, unless you are the big boss. No wonder IT departments still develop applications with a "Best viewed on IE 6+" sticker.
  • Applications lag behind market needs: Since the focus is on creating applications with longevity, the design uses tested technologies. At the pace at which these technologies are evolving, this design decision makes the technology foundations obsolete by the time the applications are delivered. For example, even today, IT departments use Struts in their designs – a technology that is already dead.
  • Applications, developed based on consensus needs, lack focus: Since there is going to be one large monolithic application meeting the requirements of several groups with different needs, the applications lack focus. For example, the same application needs to support new users and experienced users. Or, it needs to support management interested in the big picture and workers interested in doing the processing. Naturally, any application developed to such diverse and divergent needs ends up being unfocused.
  • Applications are expensive to develop: Compared to consumer apps, where we hear of apps getting developed for a fraction of the cost, the process and the "enterprise quality" requirements impose a lot of additional costs on application development.

That is, this process yields applications that are built to last.  Let us look at how consumer applications are developed.

Consumer application development

Historically, consumer applications have been developed differently.


As you can see, in each era, the consumers are different; the focus is different; and the distribution mechanism is different. File it away, as this historic view is important as we look at consumerizing IT. Delving deeper into the current era, we see the following:


Consumer applications almost always focus on end results rather than user demands. For example, take the case of Instagram. In its history, it discovered that if it followed user needs and demands, it would end up being another FB clone. Instead, it decided to keep the focus on one metric: "How do we get the most photos uploaded by the consumers?" That focused design led to its success.

Consumer applications are also built in collaboration with the consumers. By creating a model of constant experimentation, feedback from the field, and ability to improve the application, without ramifications of user support, the creators of the applications are able to build systems that are “built for change”.

But, what are the disadvantages for the consumer applications, compared to enterprise applications?

  1. Only interesting applications get developed: Go to Apple's app store, and you find many applications around weather or gaming. You do not find enough applications to do genome analysis. Developers are impatient with problems they do not understand, or problems that require a lot of knowledge to solve.
  2. Capabilities may be replicated in many applications: The strength of consumer applications, namely catering to different groups of people, means some core functionality gets repeated across applications. Instead of getting a few high quality apps, we might end up with a lot of mediocre ones.
  3. Lack of uniformity in solutions (depends on the platform): While some platforms are famous for creating a uniform and consistent experience, the applications themselves provide a fragmented experience. Unlike enterprise applications, they lack control or governance.

Consumerization: Why should IT care?

We established that enterprise applications and consumer applications have different focus. We also established that they are built, distributed, and operated differently. Still, why should IT care about consumer applications? Why should it consumerize itself?

I can think of three reasons.

Consumer focus of the businesses

Several service industries like retail, banking, entertainment, music, and health deal with consumers daily. Their business models are being disrupted by startups that bring new technologies and new breed of applications. While IT does not exactly lead the business transformation, at least by bringing the right capabilities, IT can support businesses better.

Internal users as consumers

Demographics of the employees are changing. More and more young people are joining the workforce. They are used to a different kind of experience, using modern devices and modern applications.


Even the older people are used to consumer applications: they use Gmail at home, FaceTime on their iPad, Facebook on their laptop, and LinkedIn at work. They come to work and they use Exchange without the benefit of Bayesian spam filters; they use Lync video instead of FaceTime or Hangouts; they do not even have something like Facebook or LinkedIn at work.

By not exploiting modern technologies and application paradigms, enterprises risk losing productivity and the ability to attract the right talent.

Cheaper and better Consumer technologies

Large investments in the consumer technologies are making them cheaper and better, at a faster pace than the enterprise technologies. For instance, git is improving at a faster pace than perforce. Those companies that took advantage of the cheaper alternatives in consumer technologies, reaped the benefits of cheaper and better infrastructure, application construction, and operations. Google built their data center on commodity boxes. Facebook leverages open source technologies fully for their needs. The following are the main reasons why the consumer technologies are often better choices than the enterprise grade technologies.


So, considering that enterprises are being pushed towards consumerization, how should IT react?

Consumerization: an IT perspective

The best course of action for IT is to get the best of both worlds. On one hand, it cannot run business as usual without its control and governance. On the other hand, it cannot react to markets innovatively without consumer technologies.


As we bring both of these sets of best practices together, we see some interesting facts emerge. At least for certain parts of the application domain, we see that the old style of large scale application development does not work.


As consumerization increases, we end up with a large number of small applications instead of a small number of large applications. Of course, the definition of an application itself is subject to change. For our purposes, consider any isolated, independent piece of functionality that people use to be an application. Historically, we used to talk about modularizing the application. Instead, we now break a large application down into smaller pieces. Some of these smaller pieces may have multiple versions to suit the extreme segmentation that consumerization supports.

If we are moving towards such IT landscape, what does it mean to traditional activities? In fact, all the following are impacted by this aspect of consumerization.

  • Development
  • Life cycle plan
  • Deployment
  • Support
  • Governance
  • Enterprise Architecture

Let us look at some of these challenges.

Challenges in consumerization of IT

I see three challenges in consumerizing IT.


These costs are easy to rein in, if IT can bring in some of the consumer technologies. In the next section, we will describe each of the technology changes that can help IT address these challenges.

Coping with consumerization: A recipe for IT

There are four questions that we should ask ourselves as we are embarking on consumerization of IT:


Each of these questions requires an adjustment to the way IT operates. Each of these key concepts needs a full explanation. Since this article has already grown long, I am going to be brief in describing them.

Pace layered architecture

The idea behind pace layered architecture is that different parts of IT move at different speeds. For instance, front end systems move fast, as technology advances faster there. ERP packages move slowly, as they focus on stability. Based on this idea, we can divide IT systems into three groups:

  1. Systems of record
  2. Systems of differentiation
  3. Systems of innovation

If we were to divide systems this way, we know where consumer technologies play a big role: systems of differentiation and innovation. To take advantage of this idea for consumerization, I recommend the following steps:


Platform based development

Typically, when we develop applications, we are given just a set of tools that we can use: Java, app server, database etc. Putting together these parts into a working application is left to the developers. At best, standard configurations are offered.

Most of the tasks developers need to do are standard: add security, add user authentication, add help text, add logging, add auditing, and so on. Considering that there are a lot of standard tasks that developers need to do, is there a way that we can reuse the effort?

We have been reusing different artifacts in several ways. We used libraries, templates, and frameworks. With the advent of cloud technologies, we can even turn these into a platform that is ready for the cloud. Platforms turn out to be useful in several ways: they standardize development; they reduce costs; they reduce the time to get systems ready for development; they improve the quality of the systems.

In addition, within any enterprise, there might be standard best practices that can be incorporated into the platform. With these additions, we can enforce governance as a part of the platform based development.

There are industry standard platforms as well, for specific purposes: the Google platform, the Facebook platform, the Azure platform, and the SFDC platform. Each of them offers different capabilities and can be used for different purposes. Instead of standardizing on one platform, an enterprise will have to classify its needs and plans, categorize its applications, and on that basis devise multiple platforms for its needs.

Internally, Microsoft has positioned SharePoint services and Office 365 as such a platform. Coupled with .NET technologies, it can be a full platform for delivering user defined applications.

Backend as APIs

The potential of the platform can be fully realized if the enterprise data and functionality is available to the apps developed on the platform. For instance, the data locked in the ERP applications is valuable for many modern applications. Existing logic within the system is difficult to replicate elsewhere and may be needed by the application.

By providing this information, data and logic alike, as APIs, we can enable internal applications as well as external applications. In fact, the current application development paradigms around API based front end development offer several frameworks for this style of development.


Using REST+JSON APIs, we can develop web as well as mobile applications from the same backend.
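For instance, the same hypothetical endpoint can feed both a web page and a mobile screen:

# A web or mobile client asks the shared backend for JSON over the same API
curl -H "Accept: application/json" https://api.example.com/customers/42

# Example response (the shape is illustrative only)
# {"id": 42, "name": "Acme Corp", "openOrders": 3}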

Modern app stores

Once applications are developed, they need to be operational for the people. There are four different aspects to putting applications to use.


There are several different ways such an app store or delivery platform has been handled historically. Popular choices for different ecosystems include Apple's App Store, Google Play, FB Apps, etc. If we build it right, we do not have to restrict the app store to mobile applications alone. Instead, the same delivery and support mechanism can serve mobile and web applications alike.

Concluding Remarks

Consumerization of IT is a desirable trend, if handled correctly. The right way to handle it is to bring the useful elements of consumerization to the appropriate kinds of applications. The core features from consumerization include conceptualization of apps via pace layered architecture, development via platforms, integration via APIs, and delivery via app stores.

Mar 04, 2014
 

Every once in a while, I get the urge to work with computers. I want to get my hands dirty, figuratively, and dig into the details of installation, configuration, and execution. This experimentation comes in handy when we discuss trends in the enterprise. Typically, we neglect processes when we do small scale experiments, but that is a matter for another time. Besides, it is really fun to play with new technologies and understand the direction these technologies are heading.

My personal machine

I wanted to run virtual machines on my system, instead of messing with my own machine. Because I multiplex a lot, I want to have a large enough server. That way, I can keep all the VM's open instead of waiting for them to come up when I need them.

Since my basic requirement is a large amount of memory, I settled on the Sabertooth X79 mobo. It can support 64GB, which is good enough to run at least 8 VM's simultaneously. Someday, I can convert it into my private cloud instance, but until then, I can use it as my desktop machine with a lot of personal VM's running.


I have two 27” monitors ordered off ebay, directly from Korea. Each monitor, costing $320, offers 2560×1440 resolution, with stunning IPS display – it is the same as in Samsung Galaxy, but with large 27” diagonal size. These days, you can get them from even newegg.

To support these monitors, you need dual-link DVI – two of them. They do not support HDMI, and VGA would negate all the benefits of such high resolution. A reasonable consumer grade choice is a card built on the GeForce GT 640, of which there are several.

Finally, I used pcpartpicker site (http://pcpartpicker.com/p/23MXv ) to put together all my parts and it showed if my build is compatible internally or not. Also, it helped me pick the stores where I can buy them from. I ended up ordering from newegg and Amazon, for most needs. I also had all other needed peripherals like Logitech mouse, webcam, and MS Keyboard etc. from before, which I used for my new computer.

Software

For software, I opted to use Windows 8.1, as I use office apps most of the time. I use ninite.com to install all my apps –  they can install all the needed free apps. Here are some of the apps I installed using that app: Chrome, Firefox, VLC, Java, Windirstat, Glary, Classic Start, Python, Filezilla, Putty, Eclipse, Dropbox, Google Drive.

Since I needed to run VM’s on this machine, I had a choice of VMPlayer or Virtual Box. I opted for VMPlayer.

My cloud machine

While the personal machine is interesting, that was only to free up my existing 32GB machine. The cost of such a machine, with the right components, is less than $1000. As for software, I had the choice of using ESXi 5.5, XenServer 6.2, or Microsoft Hyper-V Server 2012 R2. All of them are free, which meets my budget.

I tried ESXi (VMware vSphere Hypervisor), which did not recognize the NIC on my motherboard. I tried inserting the driver into the ISO from a previous release, but even after recognizing the Realtek 8111 NIC, it still did not work. XenServer, on the other hand, worked perfectly on the first try. Since yesterday, I have been playing with Hadoop based Linux VM's in this setup.

If you want to try

It is fairly cheap to have your own private setup to experiment. Here is what you can do:

  1. Get yourself a decent quad-core machine with 32 GB. You do not need a DVD drive, etc. Add a couple of 3TB disks (the best is the Seagate Barracuda, for the right price). If you can, get a separate NIC (the Intel Pro 1000 is preferred, as it is best supported).

    Here is one that is configured: http://pcpartpicker.com/p/34gg0 just to show how low you can go. For around $1K, you can even get one from http://www.avadirect.com/silent-pc-configurator.asp?PRID=27809 as well.

  2. Install Xenserver on the machine. It is nothing but a custom version of Linux, with Xen virtualization. You can login like any Linux machine as well. The basic interface, though, is a curses based interface to manage the network and other resources.

    [Image courtesy: http://www.vmguru.nl/ – mine was 6.2 version and looks the same. From 6.2 version, it is fully open source.]
  3. On your laptop, install XenCenter, which is the client for it. XenCenter is a full-fledged client, with lots of whizbangs.
    It has support to get to console for the machine and other monitoring help. We can use the center to install machines (from a local ISO repo), convert from VMDK to OVF format for importing etc.
  4. It is best to create machines natively, as conversion is a little error prone. I created a custom CentOS 6.4, 64-bit machine, which I use as my minimal install.
  5. When I installed it, the installation did not allow me to choose a full install. That is, it ended up installing only basic packages. I did the following to get a full install:
    1. The console doesn’t seem to offer the right support for X. So, I wanted to have VNCserver with client running on my Windows box.
    2. I installed all the needed RPM directly from the CD’s, using the following commands:
      1. I added the CDROM as a device for the YUM repo. All I needed were a few edits in the yum.repos.d folder (a sample repo file follows this list).
      2. I mounted the CDROM on Linux (“mount /dev/xvdd /media/cdrom” : notice that the cdrom device is available as /dev/xvdd).
      3. I installed all the needed software in one go: yum --disablerepo=\* --enablerepo=c6-media groupinstall "Desktop" "Desktop Platform" "X Window System" "Fonts"
    3. I enabled network and assigned static IP.
  6. I installed a VNC server and customized it to open at my display size of 2560×1440.
  7. In the end, I removed the peripherals, and made the server headless, and stuck it in the closet. With wake-on-lan configured, I never need to visit the server physically. 
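The repo entry from step 5 looks roughly like this – a sketch of /etc/yum.repos.d/c6-media.repo, matching the mount point used above (enabled=0, so it is only used with --enablerepo=c6-media):

[c6-media]
name=CentOS-6 - Media
baseurl=file:///media/cdrom/
gpgcheck=1
enabled=0
gpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-CentOS-6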

At the end, you will have a standard machine to play with, and a set of minimal installs to experiment with from your XenCenter.

What you can do with it

Now, you do not have a full private data center. For instance, you don't have machines to migrate the VM's to, set up complex networking among machines, or connect storage to compute servers. Even so, you can do the following activities:

  1. Setup a sample Hadoop cluster to experiment: It is easy enough to start with Apache Hadoop distribution itself so that you can understand the nitty gritty details. There are simple tasks to test out the Hadoop clusters.
  2. Setup a performance test center for different NoSQL databases. And, do the performance tests.  Of course, performance measurements under VM’s cannot be trusted as valid, but at least you will gain expertise in the area.
  3. Setup a machine to experiment with docker

At least, I am going to do these tasks for the rest of the week, before I head out on the road.

Feb 23, 2014
 

There is a lot of interest in moving applications to the cloud. Considering that there is no unanimous definition of cloud, most people do not understand the right approach to migrate to the cloud. In addition, the concept of migration itself is complex; what constitutes an application is also not easy to define.

There are different ways to interpret cloud. You could have a private or a public cloud. You could have just a data center for hire, or a full-fledged, highly stylized platform. You could have managed servers, or instead measure in terms of computing units, without seeing any servers.

As we move applications to any of these different kinds of clouds, you will see different choices in the way we move the applications.

Moving a simple application

Let us consider a simple application.

The application is straightforward. Two or three machines run different pieces of software and deliver a web-based experience for the customers. Now, how does this application translate to the cloud?

As-is to as-is moving

Technically, we can move the machines as-is to a new data center, which is what most people do with the cloud. The notable points are:

  1. To move to "cloud" (in this case, just another data center), we may have to virtualize the individual servers. Yes, we can potentially run whatever OS on whatever hardware, but most cloud companies do not agree. So, you are stuck with x64 and possibly Linux, Windows, and a few other x64 OS's (FreeBSD, illumos, SmartOS, and also variants of Linux).
  2. To move to the cloud, we may need to set up the network appropriately. Only the web server needs to be exposed, unlike the other two servers. Additionally, all three machines should be on a LAN for high bandwidth communication.
  3. While all the machines may have to be virtualized, the database machine is something special. Several databases, Oracle included, do not support virtualization. Sure, they will run fine in VM's, but the performance may suffer a bit.
  4. In addition, databases have built-in virtualization. They support multiple users and multiple databases, with (limited) guarantees of individual performance. A cloud provider may offer "database as a service", which we are not using here.

In summary, we can move applications as-is to as-is, but we still may have to move to the x64 platform. Other than that, there are no major risks associated with this move. The big question is, "what are the benefits of such a move?" The answer is not always clear. It could be a strategic move; it could be justified by the collective move of several other apps. Or, it could be the right time, before making an investment commitment to the data center.

Unfortunately, moving applications is not always that easy. Consider the slightly more complex version of the same application:


Let us say that we are only moving the systems within the dotted lines. How do we do it? We will discuss those complexities later, once we understand how to enhance the move so that it treats the cloud like a true cloud.

Migration to use the cloud services

Most cloud providers offer many services beyond infrastructure. Many of these services can be used without regard to the application itself. Incorporating them into existing processes, and adding new processes to support the cloud, can improve the business case for moving to the cloud. For instance, these services include:


Changes to these processes and tooling are not specific to one application. However, without changing these processes and ways of working, the cloud will remain yet another data center for IT.

Migration to support auto scaling, monitoring

If we go one step further, by adjusting the non-functional aspects of the applications, we can get more out of the cloud. The advantage of the cloud is the ability to handle the elasticity of demand. In addition, paying for only what we need is very attractive for most businesses. It is a welcome relief for architects who are asked to do capacity planning based on dubious business plans. It is an even bigger relief for infrastructure planners who chafe at vague capacity requirements from architects. It is a much bigger relief for the finance people who have to shell out for the fudge factor that infrastructure architects build into capacity.

But all of that works well only if we make some adjustments in the application architecture, specifically the deployment architecture.

How does scaling happen? In vertical scaling, we just move to a bigger machine. The problem with this approach is that the cost of the machine goes up dramatically as we scale up. Moreover, there is a natural limit to the size of a machine. If you want disaster recovery, you need to add one more machine of the same size. And, with upgrades, failures, and other kinds of events, large machines do not work out economically.

Historically, that was not the case. Architects preferred scaling up because it was the easiest option, and hardware investments went towards bigger machines. The new internet companies, however, could not scale vertically; the machines were not big enough. Once they figured out how to scale horizontally, why not use the most cost-effective machines? Besides, a system might require the right combination of storage, memory, and compute capacity, and with big machines it was not possible to tailor to the exact specs.

Thanks to VMs, we could tailor the machine capacity to the exact specs. And with cheaper machines, we could create the right kind of horizontal scaling.

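To make the elasticity concrete, here is a minimal sketch of what adjusting the deployment architecture can look like on AWS with the boto3 SDK: a pool of identical, disposable web-tier instances that grows and shrinks with load. The AMI id, names, sizes, and the 50% CPU target are hypothetical placeholders, not recommendations.

    import boto3

    autoscaling = boto3.client("autoscaling")

    # Describe how each instance should be launched (assumes a pre-built web-tier AMI).
    autoscaling.create_launch_configuration(
        LaunchConfigurationName="web-tier-lc",
        ImageId="ami-0123456789abcdef0",
        InstanceType="t3.small",
    )

    # Keep between 2 and 10 identical, disposable instances.
    autoscaling.create_auto_scaling_group(
        AutoScalingGroupName="web-tier-asg",
        LaunchConfigurationName="web-tier-lc",
        MinSize=2,
        MaxSize=10,
        DesiredCapacity=2,
        AvailabilityZones=["us-east-1a", "us-east-1b"],
    )

    # Let the provider add or remove instances to hold average CPU around 50%.
    autoscaling.put_scaling_policy(
        AutoScalingGroupName="web-tier-asg",
        PolicyName="target-50-cpu",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {"PredefinedMetricType": "ASGAverageCPUUtilization"},
            "TargetValue": 50.0,
        },
    )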

However, horizontal scaling is not always easy to achieve. Suppose you are doing a large computation – say, factorization of a large number. How do you do it on multiple machines? Or suppose you are searching for an optimal path through all fifty state capitals? Not so easy.

Still, several problems are easy to scale horizontally. For instance, if you are searching for records through a large set of files, you can split the search across multiple machines. Or, if you are serving web pages, different users can be served from different machines.
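Here is a minimal sketch of that first case – an embarrassingly parallel search. For illustration it splits the files among worker processes on one machine; the same pattern extends to multiple machines with a work queue. The log path and pattern are hypothetical.

    import glob
    from concurrent.futures import ProcessPoolExecutor

    def search_file(path, pattern="ERROR"):
        # Each file is an independent unit of work; return its matching lines.
        with open(path, errors="ignore") as f:
            return [line.rstrip() for line in f if pattern in line]

    def parallel_search(paths, pattern="ERROR", workers=8):
        with ProcessPoolExecutor(max_workers=workers) as pool:
            results = pool.map(search_file, paths, [pattern] * len(paths))
        return [line for matches in results for line in matches]

    if __name__ == "__main__":
        hits = parallel_search(glob.glob("/var/log/app/*.log"))
        print(len(hits), "matching records")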

Considering that most applications are web-based, they should be easy to scale. In the beginning, scaling was easy: none of the machines shared any state, so no communication among the machines was required. However, once the J2EE marketing machine moved in, application servers ended up sharing state. There are benefits, of course. For instance, if a machine goes down, the user can be seamlessly served from another machine.

Oracle iPlanet Web Server 7.0

(Image courtesy: http://docs.oracle.com/cd/E19146-01/821-1828/gcznx/index.html)

Suppose you add a machine or take one out. The system has to adjust so that session replication can continue. What if we run a thousand machines? Would the replication traffic still hold up? In theory it all works; in practice it is not worth the trouble.


Scaling to a large number of commodity machines works well with stateless protocols, which are the norm in the web world. If an existing system does not follow this kind of architecture, it is usually possible to adjust it without wholesale surgery on the application – for instance, by moving session state out of the application servers into a shared store.
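As an illustration, here is a minimal sketch of a stateless web handler that keeps per-user state in a shared store (Redis here), so any instance can serve any request. It assumes the flask and redis packages; the host name and keys are hypothetical.

    import uuid
    import redis
    from flask import Flask, request, make_response

    app = Flask(__name__)
    store = redis.Redis(host="session-store.internal", port=6379)  # shared by all instances

    @app.route("/visit")
    def visit():
        session_id = request.cookies.get("session_id") or uuid.uuid4().hex
        count = store.incr("visits:" + session_id)   # state lives in Redis, not in this process
        resp = make_response("Visit number %d" % count)
        resp.set_cookie("session_id", session_id)
        return resp

    if __name__ == "__main__":
        app.run(port=8080)   # run the same code on as many instances as needed

Because no instance holds anything in memory between requests, instances can be added, removed, or replaced freely – which is exactly what auto-scaling needs.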

Most data centers already do monitoring well enough. In the cloud, however, monitoring is geared towards maintaining a large number of servers: there is more automation built in, much of it driven by log files. Most cloud operators provide their own monitoring tools rather than implementing the customer's choice of tools. In most cases, by integrating with their tools (for instance, log and event integration), we can reduce the operational costs of running in the cloud.

Migration to support cloud services

If you have done all that I told you to – virtualize, move to the cloud, use auto-scaling, use the provider's monitoring – what is left to implement? Plenty, as it turns out.

Most cloud providers offer a lot of common services. Typically, these services get better with scale, and they implement well-defined protocols or serve well-defined needs. For instance, AWS (Amazon Web Services) offers the following:

[image: a sampling of AWS services]

Given this many services, if we just move machine-for-machine, we might use only EC2 and EBS. Using the higher-level services not only saves money and time, but eventually gives us access to trained engineers and third-party tools.

Re-architecting a system using these services is a tough task. In my experience, the following order provides the best bang for the buck.

[image: a suggested order for adopting these services]

The actual process of taking an existing application and moving it to this kind of infrastructure is something that we will address in another article.

Re-architecting for the cloud

While there may not always be justification for re-architecting applications entirely, for some kinds of applications it makes sense to use the platform that the cloud providers offer. For instance, Google offers a seductive platform for the right kind of application development. Take the case of providing a product-information API that your partners embed on their sites. Since you do not know what kind of promotions your partners are running, you have no way of even guessing how much traffic there is going to be. In fact, you may need to scale really quickly.

If you are using, say, Google App Engine, you won't even be aware of the machines or databases. You would use the App Engine runtime and the APIs for Bigtable. Or, if you are using platforms provided by other vendors (Facebook, SFDC, etc.), you will not think of machines at all. Your costs will truly scale up or down without you actively planning for it.

However, these platforms suit only certain application patterns. For instance, if you are developing heavy-duty data transformations, a standard App Engine application is not appropriate.

Creating an application for a specific cloud or platform requires designing the application to make use of what that platform provides. By providing standard language runtimes, libraries, and services, the platform can lower the cost of development. I will describe the standard cloud-based architectures and application patterns some other day.

Complexities in moving

Most of the complexities come from the boundaries of applications. You saw how many different ways an application can be migrated when it is self-contained. Now, what if there are a lot of dependencies, or a lot of communication between applications?

Moving in groups

All things being equal, it is best to move all the applications at once. Yet, for various reasons, we move only a few apps at a time.


If we are migrating applications in groups, we have to worry about the density of dependencies – the communications among the applications. Broadly speaking, communication between apps can happen in the following ways.

Exchange of data via files

Many applications operate on imports and exports (with transformation jobs in between). Even when we move only a subset of applications, it is easy enough to keep this file-based communication going. Since file-based communication is typically asynchronous, it is easy to set up for the cloud.

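For instance, here is a minimal sketch of keeping a file hand-off working after one side moves to the cloud: the on-premise job publishes its export into object storage, and the migrated application picks it up from there. It assumes the boto3 SDK and AWS S3; the bucket, key, and paths are hypothetical.

    import boto3

    def ship_export(local_path="/data/exports/orders.csv",
                    bucket="acme-app-exchange",
                    key="orders/orders.csv"):
        s3 = boto3.client("s3")
        s3.upload_file(local_path, bucket, key)      # on-premise side: publish the export
        # The cloud side polls (or gets notified), then downloads the same object:
        s3.download_file(bucket, key, "/ingest/orders.csv")

    if __name__ == "__main__":
        ship_export()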

Exchange of data via TCP/IP based protocols

In some cases, applications communicate via standard network protocols. Two applications may exchange XML over HTTP, or talk over TCP/IP using other kinds of protocols: X Window applications communicate with the X server over TCP/IP, and older applications may use RPC protocols. While some of these protocols are not common anymore, we might still encounter this kind of communication between applications.


To allow the communication to continue, we need to set up the firewall accordingly. Since we know the IP addresses of the endpoints, the specific ports, and the specific protocols, we can usually write effective firewall rules to allow such communication. Or, we can set up a VPN between the two locations.
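On AWS, for instance, such a narrow opening can be expressed as a security group rule; here is a minimal sketch with boto3. The security group id, port, and on-premise address range are hypothetical placeholders.

    import boto3

    ec2 = boto3.client("ec2")

    # Allow only the known on-premise range to reach the one port this app uses.
    ec2.authorize_security_group_ingress(
        GroupId="sg-0123456789abcdef0",
        IpPermissions=[{
            "IpProtocol": "tcp",
            "FromPort": 8443,
            "ToPort": 8443,
            "IpRanges": [{"CidrIp": "198.51.100.0/24",
                          "Description": "on-premise app servers"}],
        }],
    )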

Network throughput is usually easy to handle; in most applications, the throughput requirements are not very high. However, it is very common to have low-latency requirements between applications. In such cases, we can consider a dedicated network connection between the on-premise data center and the cloud data center. In many ways, it is similar to operating multi-location data centers.

Even with a dedicated line, we may not be fully out of the woods yet. We may still need to reduce the latency further. In some cases, we can deal with it by caching and similar techniques. Or, better yet, we can migrate to modern integration patterns such as SOA or a message bus using middleware.

Exchange of data via messages or middleware

If we are using middleware to communicate, the problem becomes simpler. Sure, the apps still need to communicate, but all the communication goes through the middleware. Moreover, middleware vendors already deal with integrating applications across continents, across different data centers, and across companies.


An ESB, or any other variant of middleware, can handle a lot of integration-related complexity: transformation, caching, store-and-forward, and security. In fact, some modern integration systems are specifically targeted at integrating with the cloud, or at running in the cloud themselves. Most cloud providers also offer their own messaging systems that work not only within their clouds, but also across the Internet.

Database based communication

Now, what if applications communicate via a database? For instance, an order-processing system and an e-commerce system may communicate through a shared database. If the e-commerce system is on the cloud, how does it communicate with the on-premise system?


Since this is a common problem, there are several specialized tools for DB-to-DB sync. If the application does not require real-time integration, it is easy to sync the databases. Real-time or near-real-time integration between databases requires special (and often expensive) tools. A better way is to handle the issue at the application level – that is, plan for asynchronous integration.
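Here is a minimal sketch of what such application-level asynchronous integration can look like: instead of syncing databases, the cloud side publishes "order created" events to a queue, and the on-premise side consumes them at its own pace and updates its own database. It assumes a RabbitMQ broker and the pika client; the host and queue names are hypothetical.

    import json
    import pika

    def publish_order(order):
        # Cloud side: emit an event instead of writing into the remote database.
        conn = pika.BlockingConnection(pika.ConnectionParameters(host="broker.internal"))
        channel = conn.channel()
        channel.queue_declare(queue="orders", durable=True)
        channel.basic_publish(exchange="", routing_key="orders", body=json.dumps(order))
        conn.close()

    def consume_orders():
        # On-premise side: drain the queue whenever convenient.
        conn = pika.BlockingConnection(pika.ConnectionParameters(host="broker.internal"))
        channel = conn.channel()
        channel.queue_declare(queue="orders", durable=True)

        def handle(ch, method, properties, body):
            order = json.loads(body)                  # write into the local database here
            ch.basic_ack(delivery_tag=method.delivery_tag)

        channel.basic_consume(queue="orders", on_message_callback=handle)
        channel.start_consuming()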

Conclusion

Moving applications to the cloud opens up many choices, each with its own costs and benefits. If we do not understand the choices and treat every kind of move as equal, we risk not getting the right ROI from moving to the cloud. In another post, we will discuss a cloud migration framework: how to create the business case, and how to decide which application should migrate to which cloud and to what target state.

Jul 28 2013
 

I used to run 4 VMs on my machine for application testing. Once I discovered containers, I now run close to 40 or 50 on the very same machine. And, as opposed to the 5 minutes it takes to start a VM, I can start a container in under a second. Would you like to know how?

Virtualization: Using virtual machines

I have been using computers since 1982 or so. I came to Unix around 1984 and used it until 1997. I had to use Windows after that, because Linux did not have the tools needed for regular office work. Still, because I needed Linux, I turned to an early-stage company called VMware.

Now, virtualization is a multi-billion-dollar industry. Most data centers have virtualized their infrastructure: CPUs, storage, and network. To support machine-level virtualization, technology developers are redefining application stacks as well.

Let us see how virtualization normally looks:

[image: virtual machines running on a single physical machine]

This kind of virtualization offers a lot of advantages:

  • You get your own machine, where you can install your own OS. This level of indirection is costly, but over time several advances have helped:
    • x86/x64 hardware support for machine-level virtualization: you can run a guest OS at near-native speed.
    • Software such as VMware Tools: these tools pass guest OS requests (I/O, network) directly to the host OS.
  • You get root privileges, so you can install whatever you want and offer services just like a physical machine would.
  • Backup, restore, migration, and other management tasks are easy with standard tools. Similar tools exist for physical machines, but they are expensive and not as amenable to a self-service model.

But, let us look at the disadvantages also:

  • It is expensive: if you just want to run a single application, it looks ridiculous to run an entire OS for it. We made tremendous progress in developing multi-user OSes – why are we taking a step back towards single-user OSes?
  • It is difficult to manage: easier than managing a physical machine, perhaps, but compared to just running an app, it is a lot more complex. Imagine: you need to run not only the application, but the OS as well.

To put it differently, let us take each of these advantages and see how it can be meaningless in a lot of situations:

  • What if we do not need to run different OSes? The original reason for running different OSes was to test apps on different OSes (Windows came in many different flavors, one for each language, and with different patch levels).
    • Now, client apps run on the web, a uniform platform. So, there is no need to test web apps on multiple OSes.
    • Server apps can and do depend not on the OS, but on specific packages. For instance, an application may run on any version of Linux, as long as Python 2.7 and some specific packages are present.

Virtualization: Multi-user operating systems

We have a perfect system to run such applications: Unix. It is a proven multi-user OS, where each user can run their own apps!


What is wrong with this picture? What if we want to run services? For example, what if I want to run my own mail service, or my own web service? Without being root, we cannot do that.

While this looks like a big problem, it can be solved easily. It turns out that most of the services you may want to run on a machine – mail, web, FTP, etc. – can be set up as virtual services on the same machine. For instance, Apache can be set up to serve many named virtual hosts. If we can give each user control over their own virtual services, we are all set.

Virtualization: At application level

Several companies did exactly that. In the early days of the web, this was how they provided users with their own services on a shared machine. In effect, users were sharing the servers – mail, web, FTP, etc. Even today, most static web hosting and prepackaged web apps run that way. There is even a popular web application, Webmin (and Virtualmin), that lets you manage these virtual services.


What is wrong with this picture? For the most part, for a fixed set of needs and services, it works fine. Where it breaks down is the following:

  • No resource limit enforcement: since we are virtualizing at the application level, we have to depend on the good graces of each application to enforce resource limits. If your neighbor subscribes to a lot of mailing lists, your mail response slows down. If you hog the CPU of the web server, your neighbors suffer.
  • Difficulty of billing: since metering is difficult, we can only do flat billing. Pricing cannot depend on resource consumption.
  • Unsupported apps: if the application you are interested in does not support this kind of virtualization, you cannot get the service. Of course, you can run the application in your own user space, with all the restrictions that come with it (for example, no access to privileged ports).
  • Lack of security: I can see what apps all the other users are running! Even if Unix itself is secure, not all apps are. I may be able to peek into temp files, or even into the memory of other apps.

So, is there another option? Can we give users a level of control where they can run their own services?

We can do that if the OS itself can be virtualized. That is, it should provide complete control to the users, without the costs of a VM. Can it be done?

Virtualization: At OS level (Containers)

In the early days, there were VPSes (virtual private servers), which provided a limited amount of control. Over the years, this kind of OS-level support has become more sophisticated. In fact, there are several options now that elevate virtual private servers into containers, a very lightweight alternative to VMs.


If you want to see the results of such a setup, please see:

In the beginning, there was chroot: creating a “jail” for an application so that it cannot see anything outside a given folder and its subfolders is a long-standing technique for sandboxing applications. Features like cgroups have since been added to the Linux kernel to limit, account for, and isolate the resource usage of process groups. That is, we can designate a sandbox and its subprocesses as a process group and manage them that way. Many more improvements, described below, make full-scale virtualization possible.

Now, there are several popular choices for running these kinds of sandboxes or containers: Solaris-based SmartOS (Zones), FreeBSD Jails, Linux's LXC and VServer, and commercial offerings like Virtuozzo. Remember that the container cannot change the kernel of the host OS. It can, however, override any libraries (see the section on the union file system later).

In the Linux-based open source world, two are gaining popularity: OpenShift and Docker. It is the latter that I am fascinated with. Ever since dotCloud open-sourced it, there has been a lot of enthusiasm about the project. We are seeing a lot of tools, usability enhancements, and special-purpose containers.

I am a happy user of Docker. Most of what you want to know about it can be found at docker.io. I encourage you to play with Docker – all you need is a Linux machine (even a VM will do).
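If you prefer to drive it from code, here is a minimal sketch using the Docker SDK for Python (the "docker" package); the docker command line does the same things. It assumes the Docker daemon is running locally; the images and port mapping are just examples.

    import docker

    client = docker.from_env()

    # Run a throwaway container and capture its output.
    output = client.containers.run("ubuntu", "echo hello from a container", remove=True)
    print(output.decode())

    # Start a long-running container in the background, mapping port 80 to 8080 on the host.
    web = client.containers.run("nginx", detach=True, ports={"80/tcp": 8080})
    print(web.short_id, web.status)
    web.stop()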

Technical details: How containers work

Warning: this is somewhat technical; unless you are familiar with how an OS works, you may not find it interesting. Here are the core features of the technology (most of this information is taken from “PaaS under the hood” by dotCloud).

Namespaces

Namespaces isolate resources of processes from each other (pid, net, ipc, mnt, and uts).

  • pid isolation means that processes in a namespace do not even see processes outside it.
  • The net namespace means that each container can bind to whatever port it wishes. That means port 80 is available to every container! Of course, to make a port accessible from outside, we need to do a little mapping – more on that later. Naturally, each container can have its own routing table and iptables configuration.
  • ipc isolation means that processes in a namespace cannot even see processes outside it for IPC, which improves security and privacy.
  • mnt isolation is something like chroot: a namespace can have completely independent mount points, and its processes see only that file system.
  • The uts namespace lets each namespace have its own hostname – the small sketch after this list demonstrates it.
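As a taste of how cheap a namespace is, here is a rough sketch that puts the current Python process into a new UTS namespace by calling the unshare() syscall through libc, then changes the hostname without affecting the rest of the system. It needs root; the constant comes from the kernel headers.

    import ctypes
    import socket

    CLONE_NEWUTS = 0x04000000          # from <sched.h>
    libc = ctypes.CDLL("libc.so.6", use_errno=True)

    # Detach this process into its own UTS (hostname) namespace.
    if libc.unshare(CLONE_NEWUTS) != 0:
        raise OSError(ctypes.get_errno(), "unshare failed (are you root?)")

    # Only this process (and its children) see the new hostname.
    libc.sethostname(b"sandbox", len(b"sandbox"))
    print("hostname inside the namespace:", socket.gethostname())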

Control groups (cgroups)

Control groups, originally contributed by Google, let us manage resource allocation for groups of processes. We can do accounting and resource limiting at the group level: we can set the amount of RAM, swap space, cache, CPU, etc. for each group. We can also bind a core to a group – a feature useful on multicore systems! Naturally, we can limit the number of I/O operations, or the bytes read and written.

If we map a container to a namespace and the namespace to a control group, we are all set in terms of isolation and resource management.
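To see what that mapping amounts to, here is a rough sketch of what a container runtime does with cgroups: create a group, set a memory limit, and move a process into it. It assumes the older cgroup v1 layout mounted at /sys/fs/cgroup and root privileges; the group name and limit are made up.

    import os

    CGROUP = "/sys/fs/cgroup/memory/demo_sandbox"

    def limit_memory(pid, limit_bytes=256 * 1024 * 1024):
        os.makedirs(CGROUP, exist_ok=True)                      # create the group
        with open(os.path.join(CGROUP, "memory.limit_in_bytes"), "w") as f:
            f.write(str(limit_bytes))                           # cap RAM for the whole group
        with open(os.path.join(CGROUP, "cgroup.procs"), "w") as f:
            f.write(str(pid))                                   # move the process into the group

    if __name__ == "__main__":
        limit_memory(os.getpid())    # this process and its children are now limited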

AUFS (Another Union File System)

Imagine the scenario. You are running in a container. You want to use the base OS facilities: kernel, libraries — except for one package, which you want to upgrade. How do you deal with it?

In a layered file system, you create only the files you want to change. These files supersede the files in the base file system. Naturally, other containers see only the base file system, and they too can selectively overwrite files in their own layer. It all looks and feels natural – every container is under the illusion of owning the file system completely.


The benefits are several: storage savings, fast deployments, fast backups, better memory usage, easier upgrades, easy standardization, and ultimate control.

Security

A collection of security patches rolled into one, called grsecurity, offers additional protection:

  • Protection against buffer overflow attacks
  • Separation of executable code and writable parts of memory
  • Randomization of the address space
  • Auditing of suspicious activity

While none of them are revolutionary, taken together all these steps offer the required security between the containers.

Distributed routing

Let us suppose each person runs their own Apache on port 80. How do they expose that service to outsiders? Remember that with VMs, you either get your own IP or you hide behind a NAT. If you have your own IP, you control your own ports.

In the world of containers, this kind of port-level magic is a bit more complex. In the end, though, you can set up a bridge between the host OS and the container so that the container shares the same network interface, using a different IP (perhaps granted by DHCP, or set manually).

A more complex case: if you are running hundreds of containers, is there a way to offer better throughput for service requests – specifically, for web applications? That question can be handled with standard HTTP routers (like nginx).
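To make the routing idea concrete, here is a toy round-robin HTTP router in Python that spreads GET requests across a few web containers. Real deployments use nginx, HAProxy, or similar; the backend addresses here are hypothetical port mappings on the host.

    import itertools
    import urllib.request
    from http.server import BaseHTTPRequestHandler, HTTPServer

    # Three web containers, each mapped to a local port on the host.
    BACKENDS = itertools.cycle([
        "http://127.0.0.1:8081",
        "http://127.0.0.1:8082",
        "http://127.0.0.1:8083",
    ])

    class RoundRobinRouter(BaseHTTPRequestHandler):
        def do_GET(self):
            backend = next(BACKENDS)                     # pick the next container in turn
            with urllib.request.urlopen(backend + self.path) as resp:
                body = resp.read()
            self.send_response(resp.status)
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    if __name__ == "__main__":
        HTTPServer(("", 8080), RoundRobinRouter).serve_forever()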

Conclusion

If you followed me so far, you learnt:

  1. What containers are
  2. How they logically relate to VMs
  3. How they logically extend Unix
  4. What the choices are
  5. What the technical underpinnings behind containers are
  6. How to get started with containers

Go forth and practice!

Sep 05 2011
 

With more than 20,000 participants and 400 exhibitors, VMworld is the place to be for people interested in virtualization, cloud, and the new trends in IT driven by fast changes in infrastructure. Of course, at $3,000 per head, it is also one of the most expensive conferences out there. I, along with VMware's Thirumalesh, presented on migrating from WebLogic to cloud-ready platforms:

First, the general trends

VMworld 2011 at Las Vegas

It is clear that VMware has spawned one of the biggest ecosystems in the world.

To start with, all VMware has is the ability to virtualize a machine. That is, it can run a program on a machine so that the machine appears to be multiple machines, or VMs. That means on one server or PC, we can run multiple OSes.

Next, they provide the ability to take an application and an OS and bundle them as a thin VM (just enough OS to make that app run). That means all the issues of porting to different OSes, and the incompatibilities among OSes, disappear.

Next, they moved in the direction of managing the VMs. They can manage capacity leveling, load balancing, and sharing across VMs. They can manage a cluster of host machines running several VMs; for instance, they can migrate a VM from one server to another. The management possibilities are enormous. Your server going down? Just move your VM to another server and be done with it. A hurricane coming your way? Move your VM to a server in a different country.

And, of course, if you can capture, restart, and replicate a machine, you can do a lot of other interesting magic: horizontal scaling, recovery, and backups. Many of these capabilities need support at various levels: compute, storage, and networking.

The traditional way of viewing the virtual server market – compute, storage, and networking – is maturing progressively. There are a lot of companies solving problems in management, provisioning, operations, and decision making in the virtual infrastructure area.

One interesting trend is virtualizing at the device level. VMware is working on technologies to push a virtual device onto Android or the iPad (it is a bit more convoluted – it is actually an interface) so that people can own their own devices and still access company resources, with a clear wall between the personal and the corporate. If a person leaves the company, a central admin can de-provision the corporate side without having to be there physically.

On one hand, the trend of managing personal devices for the company is exciting. Still, I have mixed feelings about it (I think it is encroaching on the capabilities of the web – why are we going down this path anyway?). Watch this space for a detailed post with my perspective on that.

Next, journey to the cloud

Stick figure guide to cloud computing from Tier 3

As interesting as the infrastructure business is (and lucrative too), VMware is spending a lot of time and money on cloud platforms. For them, it is part of the strategy to get organizations to move to the cloud. They invested heavily in the SpringSource platform under vFabric. They are also investing in other platforms to make sure those run better in VMware's cloud, and they are looking at still other platforms as long as they fit into their cloud vision.

One of the big issues with their platform vision is this: they are not leaders in the platform business. Historically, they have not been a player in providing any software stack. Other vendors (IBM, Microsoft, Oracle, etc.) and other cloud players (SFDC, Google) are bigger in the platform business. So, VMware is making a big push with its platform.

The first impediment to this platform is that companies have developed a lot of applications on the standard J2EE stack. These technologies are complex, with too many levers to operate and optimize. For VMware, they are an obstacle to a successful move to the cloud: yes, they can be virtualized, but without many of the benefits that VMware can offer.

If you are wondering about the benefits, consider one example. VMware's hypervisor (the software that runs all the VMs) allocates memory to the VMs. If it knows how the machines are actually using that memory, it can adjust dynamically and manage more VMs with the same memory – different machines may be peaking at different times. With regular VMs, VMware runs a program at the OS level to help smooth out the demand for memory. But if the workload is a Java program, how can it do that? Enter EM4J – Elastic Memory for Java. It runs only on tcServer, a part of vFabric.

What all this means is this: VMware would like to see a transition from other platforms to vFabric, so that they can win the cloud war.

This is a short note – soon, I will be writing about my impressions of micro foundry.
