RandomAnt


Technology + Management + Innovation
27
Dec
2015
DevOps

Three Reasons Why Docker is a Game Changer

by Jake Bennett

Containers represent a fundamental evolution in software development, but not for the reasons most people think.

Docker’s rapid rise to prominence has put it on the radar of almost every technologist today, IT professionals and developers alike. Docker containers hold the promise of providing the ideal packaging and deployment mechanism for microservices, a concept that has seen its own surge in popularity.
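
To make the packaging idea concrete, here is a minimal Dockerfile sketch for containerizing a single Node.js microservice (the base image, port and entry point are illustrative assumptions, not details from any particular project):

    # Hypothetical Dockerfile for a small Node.js microservice
    FROM node:4                  # base image tag is illustrative
    WORKDIR /app
    COPY package.json .
    RUN npm install              # bake dependencies into the image
    COPY . .
    EXPOSE 8080                  # port assumed for this example
    CMD ["node", "server.js"]    # entry point assumed for this example

Everything the service needs (runtime, dependencies, code) travels inside the image, which is what gives containers their isolation and portability: build once with “docker build -t my-service .”, run anywhere with “docker run -p 8080:8080 my-service”.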

But while the industry loves its sparkling new technologies, it is also deeply skeptical of them. Until a new technology has been battle-tested, it’s just an unproven idea with a hipster logo, so it’s not surprising that Docker is being evaluated with a critical eye—it should be.

To properly assess Docker’s utility, however, it’s necessary to follow container-based architecture to its logical conclusion. The benefits of isolation and portability, which get most of the attention, are reasons enough to adopt Docker containers. But the real game changer, I believe, is the deployment of containers in clusters. Container clusters managed by a framework like Google’s Kubernetes allow for the true separation of application code and infrastructure, and enable highly resilient and elastic architectures.
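
As a sketch of what cluster-managed deployment looks like, here is a minimal Kubernetes Deployment manifest (the names, image and replica count are invented for illustration):

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 3                  # the cluster keeps three copies running
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: example/web:1.0   # hypothetical container image
            ports:
            - containerPort: 8080

You declare the desired state and the cluster converges on it; if a node dies, Kubernetes reschedules the lost containers elsewhere, which is where the resilience and elasticity come from.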

It is these three benefits in combination—isolation, portability, and container clustering—that are the real reasons why Docker represents such a significant evolution in how we build and deploy software. Containers further advance the paradigm shift in application development brought about by cloud computing by providing a higher layer of abstraction for application deployment, a concept we’ll explore in more detail later.

Is Docker worth it?

However, you don’t get the benefits of containers for free: Docker does add a layer of complexity to application development. The learning curve for Docker itself is relatively small, but it gets steeper when you add clustering. The question then becomes: is the juice worth the squeeze? That is, do containers provide enough tangible benefit to justify the additional complexity? Or are we just wasting our time following the latest fad?

Certainly, Docker is not the right solution for every project. The Ruby/Django/Laravel/NodeJS crowd will be the first to point out that their PaaS-ready frameworks already give them rapid development, continuous delivery and portability. Continuous integration platform provider Circle CI wrote a hilarious post poking fun at Docker, transcribing a fictitious conversation in which a Docker evangelist tries to explain the benefits of containers to a Ruby developer. The resulting confusion about the container ecosystem perfectly captures the situation.

more…

10
Nov
2015
DevOps

Strengthen Your AWS Security by Protecting App Credentials and Automating EC2 and IAM Key Rotation

by Jake Bennett

Effective information security requires following strong security practices during development. Here are three ways to secure your build pipeline, and the source code to get you started.

One of the biggest headaches faced by developers and DevOps professionals is the problem of keeping the credentials used in application code secure. It’s just a fact of life. We have code that needs to access network resources like servers and databases, and we have to store these credentials somewhere. Even in the best of circumstances this is a difficult problem to solve, but the messy realities of daily life further compound the issue. Looming deadlines, sprawling technology and employee turnover all conspire against us when we try to keep the build pipeline secure. The result is “credential detritus”: passwords and security keys littered across developer workstations, source control repos, build servers and staging environments.

Use EC2 Instance Profiles

A good strategy for minimizing credential detritus is to reduce the number of credentials that need to be managed in the first place. One effective way to do this in AWS is by using EC2 Instance Profiles. An Instance Profile is an IAM Role that is assigned to an EC2 instance when it’s created. Once this is in place, any CLI or SDK calls to AWS resources made by code running on the instance execute within the security context of the Instance Profile. This is extremely handy because it means you don’t need to worry about getting credentials onto the instance when it’s created, and you don’t need to manage them on an ongoing basis—AWS automatically rotates the keys for you. Instead, you can spend your time fine-tuning the security policy for the IAM Role to ensure that it has only the privileges it needs to get its job done.
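
For example, with an Instance Profile in place, SDK code needs no embedded keys at all. A minimal Python sketch (the bucket listing is chosen arbitrarily for illustration):

    import boto3

    # On an EC2 instance launched with an Instance Profile, boto3 resolves
    # temporary credentials from the instance metadata service on its own.
    s3 = boto3.client("s3")  # note: no access key or secret key arguments

    for bucket in s3.list_buckets()["Buckets"]:
        print(bucket["Name"])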

Get Credentials Out of Source Control

EC2 Instance Profiles are a big help, but they won’t completely eliminate the need to manage credentials in your application. There are plenty of non-AWS resources that your code requires access to, and EC2 Instance Profiles won’t help you with those. For these credentials, we need another approach. This starts by making sure that credentials are NEVER stored in source control. A good test I’ve heard is this: if you were forced to push your source code to a public repository on GitHub today, then you should be able to sleep well tonight knowing that no secret credentials would be revealed. How well would you sleep if this happened to you?
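
One common way to honor that rule, sketched here in Python, is to pull secrets from the process environment rather than from files in the repo (the variable name is hypothetical):

    import os

    # Read the database password from the environment instead of the repo.
    # A KeyError here is a feature: fail loudly if the secret wasn't injected.
    DB_PASSWORD = os.environ["DB_PASSWORD"]

The build server or process manager injects the variable at deploy time, so the secret never touches source control.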

The main problem is that source code repositories typically have a wide audience, including people who shouldn’t have access to security credentials. And once you check in those credentials, they’re pretty much in there forever. Moreover, if you use a distributed SCM like Git, those credentials are stored along with your source code on all of your developers’ machines, further increasing your exposure. The more breadcrumbs you leave lying around, the more likely rats will end up infesting your home. This appears to be what happened in the Ashley Madison hack that took place earlier this year: hard-coded credentials stored in source control were implicated as a key factor in the attack. Apparently, their source code was littered with the stuff. Not good.
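
The title also promises automated IAM key rotation; a rough boto3 sketch of that idea might look like the following (the user name is hypothetical, and in practice you would distribute the new key to its consumers before retiring the old one):

    import boto3

    iam = boto3.client("iam")
    USER = "build-bot"  # hypothetical IAM user

    old_keys = iam.list_access_keys(UserName=USER)["AccessKeyMetadata"]
    new_key = iam.create_access_key(UserName=USER)["AccessKey"]
    # ...hand new_key["AccessKeyId"] / new_key["SecretAccessKey"] to consumers...

    for meta in old_keys:
        # Deactivate rather than delete, so a missed consumer can be
        # reactivated quickly; delete in a later pass once nothing breaks.
        iam.update_access_key(UserName=USER,
                              AccessKeyId=meta["AccessKeyId"],
                              Status="Inactive")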

more…

30
Mar
2015
Management

Eight Reasons Why Agile Motivates Project Teams

by Jake Bennett

Research proves what software developers already know: Agile projects are more fun and inspiring to work on. In this article, we review the science that explains why Agile fosters greater motivation.

A few weeks ago, I finished conducting a series of video retrospectives with several POP team members who recently completed Agile/Scrum projects. The goal of these one-on-one interviews was to elicit the kinds of critical insights that can only be discovered through in-the-trenches experience. Video recording the conversations allowed me to quickly distribute these Agile learnings to the larger agency in easy-to-digest bites.

It was great listening to the team talk about their Scrum experiences, but what struck me the most was the universal belief among the people I talked to that Agile projects were more fun and motivating than Waterfall projects. I wouldn’t have considered this a pattern if the people I interviewed had all worked on the same project. But the team members I spoke with worked on a variety of different projects, ranging from e-commerce sites, to mobile apps, to frontend-focused work. Furthermore, the participants came from different departments, including design, development, project management and QA. Yet despite these differences, it was clear that everyone I talked to shared one thing in common: they all had a much higher level of satisfaction and motivation when working on Agile projects. So for me the big question was: Why? What was it about Agile that fostered greater motivation and better performance than a typical Waterfall project?

Money certainly didn’t have anything to do with it. None of the team members I spoke with were compensated any more or less based on their participation on an Agile project. But the fact that money wasn’t the answer didn’t come as a surprise. Decades of research have debunked the myth that money motivates employees, despite corporate America’s obsession with performance-based pay. So if not money, then what?

The truth is that no single aspect of Scrum increases motivation on its own. But when you dig into the research behind employee motivation, it becomes pretty clear that several aspects of Scrum, taken together, do. To better understand why, let’s dive into the research.

1. Setting Goals

One of the most powerful motivators for employees is simply setting clear goals. According to Stephen Robbins, professor and author of The Essentials of Organizational Behavior, the research is definitive—setting goals works: “Considerable evidence supports goal-setting theory. This theory states that intentions—expressed as goals—can be a major source of work motivation. We can say with a considerable degree of confidence that specific goals lead to increased performance; that difficult goals, when accepted, result in higher performance than easy goals; and that feedback leads to higher performance than no feedback.” Fortunately, we’ve already been told that setting actionable goals is good, which is why managers focus on goal-setting during the annual employee review process. But this isn’t enough. Clear and challenging project-based goals occur more frequently and are usually more tangible and satisfying than amorphous yearly career goals. So having employees work on Scrum projects throughout the year can help bolster morale in between the reviews.

more…

13
Jan
2015
DevOps

Using Vagrant, Chef and IntelliJ to Automate the Creation of the Java Development Environment

by Jake Bennett

The long path to DevOps enlightenment begins with the Developer’s IDE: Here’s how to get started on the journey. In this article we walk through the steps for automating the creation of a virtual development environment.

One of the challenges faced by software developers working on cloud applications and distributed systems today is setting up the developer workstation in a development environment composed of an ever-growing number of services and technologies. It was already hard enough to configure developer workstations for complex monolithic applications, and it’s even harder now that we’re breaking applications down into multiple microservices and databases. If you are starting to feel like your developers’ workstations have become fragile beasts, able to generate builds only by the grace of God and years of mystery configuration settings, then you are headed for trouble. Seek medical help immediately if you are experiencing any of the following symptoms:

  • The onboarding of new developers takes days or even weeks because getting a new development machine configured is a time-consuming and error-prone process.
  • The words “But the code works on my machine” are uttered frequently within your organization.
  • Bugs are often discovered in production that don’t occur in development or staging.
  • The documentation for deploying the application to production is a short text file with a last modified date that’s over a year old.

The good news is that there are technologies and practices to remedy these problems. The long-term cure for this affliction is cultivating a DevOps culture within your organization. DevOps is the new hybrid combination of software development and infrastructure operations. With the rise of virtualization and cloud-computing, these two formerly separate departments have found themselves bound together like conjoined twins. In the cloud, hardware is software, and thus software development now includes infrastructure management.
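
As a taste of where this leads, here is a minimal Vagrantfile sketch that boots a VM and hands provisioning over to Chef (the box name, forwarded port and recipe are placeholder assumptions):

    # Vagrantfile (Ruby DSL): one command, "vagrant up", builds the VM.
    Vagrant.configure("2") do |config|
      config.vm.box = "ubuntu/trusty64"              # placeholder base box
      config.vm.network "forwarded_port", guest: 8080, host: 8080
      config.vm.provision "chef_solo" do |chef|
        chef.cookbooks_path = "cookbooks"
        chef.add_recipe "java"                       # hypothetical recipe
      end
    end

From there, “vagrant up” builds an identical environment on every developer machine, and “vagrant destroy” throws it away the moment it drifts.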

more…

27
Nov
2014
Cloud Computing

The cloud is more than just a new place to park your app: it’s a paradigm shift in how we build software

by Jake Bennett

Cloud computing makes possible a new breed of applications that are much more robust and highly tolerant of change. Here are 10 key architectural considerations when developing applications born in the cloud.

There was a time, back in the day, when life as a software architect was simple. Release cycles were measured in half-years and quarters. Application traffic was significant, but not insane. Data was always persisted in a centralized, relational database (because that was the only option). And best of all, the applications themselves were hosted on high-end, company-owned hardware managed by a separate operations team. Life back in the day was good.

But then the Internet kept growing.

Release cycles got shorter as the business fought to remain competitive. Traffic continued to grow, and huge spikes could happen at any time. The relational database was coming apart at the seams, no matter how much iron was thrown at it. And in the midst of it all, everyone started talking incessantly about this new thing called “the cloud” that was supposed to fix everything. The brief period of easy living had come to an end.

But let’s face it: What good architect wants easy living anyway?

A good software architect is driven by intellectual curiosity, and by that measure, now is a great time to be an architect. The technology options and design patterns available today are truly awesome, driven in large part by advances made in cloud-computing.

The Cloud Changes Everything

To fully understand the potential of cloud-computing, we need to look beyond merely migrating existing applications to the cloud and focus instead on what application architecture looks like for new applications born from inception in a cloud environment. The cloud is more than just a new place to park your application: it represents a fundamental paradigm shift in how we build software. Drafting a blueprint for cloud-born applications is obviously helpful for new, blue sky projects, but it benefits legacy, non-cloud applications too. By defining the ideal end-state for cloud-born applications, existing apps are given more than just a migration path to the cloud—they are given a clear path for how to be re-born in the cloud. Hallelujah.

But first let’s define what we mean by “the cloud” since the term is so terribly misused. (Former Oracle CEO Larry Ellison had a great rant on this topic during an interview at the Churchill Club.) When I use the term “cloud computing,” I mean something very specific: applications deployed on a hosting platform, like Amazon Web Services (AWS) or Microsoft Azure, that enables on-demand infrastructure provisioning and billing, in addition to a range of hosting-related services such as specialized storage and security. I don’t mean, as it is often used, anything that is available via HTTP. (It’s entertaining to see how Internet companies have recast themselves over the years to exploit the buzzword du jour, first as ASPs, then as SaaS providers and now as Cloud companies.) This narrower definition of “the cloud” also includes internally managed, highly virtualized data centers running cloud-provisioning software like VMware vCloud Automation Center (now vRealize Automation).

The special alchemy that cloud computing provides for software developers is that it turns hardware into software. It is difficult to overstate how profound this change is for software development. It means that infrastructure is now directly in our control. We can include infrastructure in our solutions, refactor it as we go and even check it into source control—all without ever leaving the comfort of our desks. No need to rack a new server in the data center; just hop onto the AWS Console and provision a couple of EC2 instances in-between meetings. (Or better yet, use an Elastic Load Balancer with an Auto Scaling Group to automatically provision the new servers for you while you drink Piña Coladas on the beach.)
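
To illustrate just how literally hardware has become software, here is a short Python sketch that “racks” two servers from a script (the AMI ID and instance type are placeholders):

    import boto3

    ec2 = boto3.resource("ec2")

    # Provisioning infrastructure from code, in between meetings.
    instances = ec2.create_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder AMI
        InstanceType="t2.micro",
        MinCount=2,
        MaxCount=2,
    )
    print([i.id for i in instances])

Tearing the instances back down is just as easy, which is what makes experimenting with infrastructure so cheap.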

This cloud-computing alchemy also means that cutting-edge design patterns pioneered years ago by dotcom heavyweights like Amazon, eBay and Netflix are now available to the masses. Indeed, many of the architectural considerations discussed here are not really new—they’re just now feasible for the rest of us. Cloud providers have virtualized hosting infrastructure, exposed infrastructure services through APIs, and developed on-demand billing, which means you can now experiment with building crazy scalable architectures in your PJs at home. Very cool.

The Ten Commandments of Cloud-Born Application Architecture

Below are ten key tenets for architects to consider when developing applications in this new cloud-born world:

  1. There is No Silver Bullet
  2. Design for Failure
  3. Celebrate Database Diversity
  4. Embrace Eventual Consistency
  5. Move to Microservices
  6. Adopt Asynchronicity
  7. Automate Everything
  8. Design for Security
  9. Architect as Accountant
  10. Solving for Scalability Solves for Frequent Change

more…

5
Oct
2014
Solutions Architecture

Proximity marketing has arrived. Here’s the blueprint for creating a one-to-one digital conversation with your shopper in-store today.

by Jake Bennett

Emerging technologies like iBeacon and Near Field Communication (NFC) have opened up the possibilities for unparalleled in-store interactivity with shoppers. The key is staying focused on using this new tech to actually enhance the shopping experience for the customer.

Emerging in-store positioning technologies like iBeacon hold the promise of highly personalized, “Minority-Report-like” marketing programs. However, this technology is still at a very early stage. Retailers who adopt the technology first—and are able to execute it brilliantly—will almost certainly gain a competitive advantage. But the challenge is that it’s not entirely clear what experiences can be created today that actually offer a better shopping experience. Much of what the industry is talking about now centers on using proximity technology to offer coupons to shoppers in-store. I, for one, think we can do a lot better than incessantly pushing discounts to shoppers as they peruse the aisles.

At POP, the innovation team wanted to separate the hype from the reality by building a real, working prototype using today’s technology to create an in-store shopping experience that didn’t suck. We wanted to build something that added value to the shopping experience for the customer and promoted stronger sales for the retailer.

We also wanted to develop the prototype fast. Not in months, but in weeks. We set three weeks as the goal. We know that the rate of change in retail is crazy, and that we have to move at the same pace. So with a big goal in mind and a short timeline in front of us, we started by formulating a game plan. Our idea required a custom solution, but there wasn’t time to develop everything from scratch, so we needed to establish a “Lego-like” architecture: leverage pre-existing pieces and spend our time putting them together. Our technology Lego-set looked something like this:

  • HTML 5, rather than proprietary animation technologies, for kiosk motion video
  • A standard touch-screen kiosk running Windows and off-the-shelf kiosk security software
  • Cloud services via Amazon Web Services
  • Estimote iBeacons (Technical note: For a production system, it’s imperative to choose a beacon vendor with a security layer included, to avoid security attacks like beacon spoofing. We used Estimotes here for ease of deployment.)

Everywhere Communication

One of the coolest aspects of beacon technology is its ability to complete the last link in the chain between the shopper and the retail space he’s in. But to make this work in a real-world retail setting, you have to get all of the pieces (beacon, mobile app, website and kiosk) talking to each other. With that in mind, our team focused on building the communication layer first, which allowed us to prove that the whiteboard sketch could be implemented in practice. The communications architecture employs REST API calls and WebSockets for the bulk of the inter-component messaging.
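
As an illustrative sketch (not the actual POP implementation), a tiny relay like the one below is enough to fan a beacon-triggered event out to every connected kiosk and app; it assumes Python 3 and a recent version of the third-party websockets package:

    import asyncio
    import json
    import websockets

    connected = set()  # open sockets for kiosks, apps, etc.

    async def relay(websocket):
        connected.add(websocket)
        try:
            async for message in websocket:
                event = json.loads(message)  # e.g. a beacon sighting
                # Fan the event out to every other connected component.
                for peer in connected - {websocket}:
                    await peer.send(json.dumps(event))
        finally:
            connected.remove(websocket)

    async def main():
        async with websockets.serve(relay, "0.0.0.0", 8765):
            await asyncio.Future()  # run forever

    asyncio.run(main())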

more…

7
Sep
2014
Security

Six Practical Steps You Should Take to Protect Yourself from Cyber Criminals

by Jake Bennett

By dissecting the methods used by hackers in the recent wave of cyber attacks, we can identify ways to stay more secure online.

A rash of cyber attacks and security news hit over the Labor Day weekend, impacting The Home Depot, Healthcare.gov, Goodwill and Apple. But at least this recent flurry of security activity is positive in one respect: it gives us a glimpse into the mechanics of real-world attack scenarios. The more we can use this as a learning opportunity, the safer we’ll be. Here are a few lessons we should take away from the attacks:

1. Understand that even if you do everything right, you’re still not safe

During the first few days of the September iCloud breach, in which explicit pictures of several celebrities were hacked via Apple’s iCloud backup service, many people were saying that the victims should have used two-factor authentication to protect their information (sadly, another example of the “blame the victim” mentality). It was later disclosed, however, that Apple’s two-factor authentication didn’t actually cover iCloud backups. So even if you were one of the rare, paranoid people who use two-factor authentication, it wouldn’t have protected you.

In a similar vein, having the most secure password in the world wouldn’t have helped the customers of Home Depot or Goodwill, whose stolen credit cards were used in-store. If the people processing your credit cards get hacked, no amount of cyber protection will save you.

more…

5
Sep
2014
Security

Ukrainian Hacker Strikes Again. Creepy Hacker Community Compromises Apple iCloud.

by Jake Bennett

A wave of high profile security breaches was recently discovered, potentially affecting millions of people. Each attack had a unique footprint, giving us an interesting glimpse into the scary world of cyber crime.

Somewhere in the PR offices of Goodwill, the Department of Health and Human Services, and The Home Depot, a crisis-management specialist is enjoying a small moment of thanks. On the one hand, they’ve probably had a pretty terrible week, dealing with the press and trying to explain the causes and impacts of major security breaches within their organizations. On the other hand, they are probably considering themselves lucky. They know that the best way to divert attention away from their own crisis is for another, more interesting crisis to hit at the same time. Fortunately for them, their unspoken prayers were answered. At the same time stories broke about their breaches, it was revealed that naked photographs of high-profile female celebrities were stolen from Apple’s iCloud service. Hacking + Apple + celebrities + naked selfies = a four-of-a-kind in the tech news world, and trumps even news of a security breach that might be bigger than Target’s 2013 attack. Let’s face it, Jennifer Lawrence has a lot more charisma than Home Depot credit card numbers.

Although this string of hacks might have been an unexpected deus ex machina for a few lucky PR professionals, for the rest of us it’s a really scary series of events that forces us to take a step back and ask the question: is anything safe online? Let’s review each of these breaches and see what we can learn from them so we can better protect ourselves in cyberspace.

more…

19
Aug
2014
Robotics

Self-Organizing Kilobots Attack!

by Jake Bennett

Harvard University recently developed swarm-intelligent micro-bots that can self-organize and accomplish simple tasks. This is a great illustration of the possibilities of emergent phenomena.

Harvard researchers developed a system of 1,024 micro-robots that move using vibration and can self-organize to accomplish simple tasks, like forming the shape of a wrench or a star. The swarm system is modeled on biological systems (like ants!) that display complex behavior by following a handful of simple rules. The feat was considered a breakthrough due to the large number of bots in the swarm: previous micro-bot swarms numbered fewer than 100.

I can’t wait to see how big these colonies can get, and how complex their work will become. Ant colonies can grow up to 1 million in size, so the micro/nano-bots have a ways to go before they catch up with their biological brethren.

17
Aug
2014
Security

CIA’s Top Security Innovator Proposes Some Ideas That Are Crazy Enough to Work

by Jake Bennett

Dan Geer, the top security chief at the CIA’s VC firm In-Q-Tel, gave a thought-provoking keynote at this year’s Black Hat security conference, arguing that thoughtful government regulation is the best hope for shoring up our cyber defenses. He may just be right.

The Iconoclast

Dan Geer has never been one to walk away from a fight. In 2003, he was fired from security firm @Stake after authoring a report, released by the Computer and Communications Industry Association, arguing that Microsoft’s monopoly over the desktop was a national security threat. Given that Microsoft was a client of @Stake at the time, it’s not a shocker that he didn’t make employee of the month. Somewhat humorously, in an interview with Computerworld after the incident, Dan remarked, “It’s not as if there’s a procedure to check everything with marketing.” Somehow I think a guy with degrees from MIT and Harvard didn’t need to check in with marketing to gauge what his firm’s reaction to the paper would be.

Fortunately for the Black Hat audience (and those of us who watched the presentation online), Dan continued to live up to his reputation. He outlined a 10-point policy recommendation (well summarized here) for improving cyber security. In the preamble leading up to the policy recommendations, he made two key points that provide critical support for his policy argument:

  1. The pace of technology change is happening so quickly now that security generalists can no longer keep up. Highly specialized security experts and governments are now needed to protect our information assets.
  2. If you want to increase information security, you have to be pragmatic and willing to make compromises. As Dan succinctly put it: “In nothing else is it more apt to say that our choices are Freedom, Security, Convenience—Choose Two.”

These points are important to keep in mind when listening to his presentation because they provide critical context for his potentially unpalatable policy recommendations.

To Regulate or Not to Regulate

As a card-carrying capitalist, I’m naturally wary of government technology regulation. But as a digital technologist I’m absolutely terrified of it.

more…