Balancing Control and Agility in Today’s IT Operational Reality

How can IT departments balance control and agility in today’s IT operational reality? For decades, IT Operations has viewed itself as the controlling influence on the “wild west” of business influences. We have had to create our own culture of control in order to extend our influence beyond the four hardened walls of the datacenter, and now the diaphanous boundaries of the cloud. Control was synonymous with good IT hygiene, and we prided ourselves on it. It’s not by accident that outside IT circles we were viewed as gatekeepers and traffic cops, regulating the use (and, hopefully, preventing the abuse) of valuable IT resources and critical data sets. Many of us built our careers on a foundation of saying “no,” or, for those of us with less tact, “are you crazy?”

That was then, when we were the all-seeing, god-like nocturnal creatures operating in the dark of server rooms and wiring closets. Our IT worlds have changed dramatically since those heady days of power and ultimate dominion over our domain(s). I mean, really, we actually created something called Domains so the non-IT peasant-class could work in our world more easily, and we even have our own Internet Hall of Fame!

Now, life is a little different. IT awareness has become more mainstream, and innovation is actually happening at a faster pace in the consumer market.  We are continually being challenged by the business, but in a different and more informed manner than in our old glory days. We need to adapt our approach, and adjust our perspective in order to stay valued by the business. My colleague John Dixon has a quality ebook around the evolution of the corporate IT department that I would highly recommend taking a look at.

This is where agility comes into play. Think of what it takes to become agile: it takes both a measure of control and a measure of flexibility. They seem to be odd roommates, but in actuality they feed off each other and balance one another. Control is how you keep chaos out of agility, and agility is how you keep control from becoming too restraining.

Mario Andretti has a great quote about control: “If everything seems under control, you’re just not going fast enough.” And this is where the rub is in today’s business climate. We are operating at faster speeds and shorter times-to-market than ever before. Competition is global and not always above-board or out in the open. The raw number of influences on our customer base has increased exponentially. We have less “control” over our markets now, and so by nature have to become more “agile” in our progress.

IT operations must become more agile to support this new reality. Gone are the days of saying “not on my platform”, or calling the CIO the CI-NO. To become more agile, we need to enable our teams to spend more time on innovation than on maintenance.

So what needs to change? Well, first, we need to give our teams back some of the time and energy they are spending on maintenance and management functions. To do this, we need to drive innovations in that space and think about the lowest cost of delivery for routine IT functions. To some this means outsourcing; to others it’s about better automation and collaboration. If we can offload 50-70% of the current maintenance workload, our teams can turn their attention away from the rear-view mirror and start looking for the next strategic challenge. A few months back I did a webinar on how IT departments can modernize their IT operations by killing the transactional treadmill.

Once we have accomplished this, we then need to re-focus their attention to innovating for the business.  This could be in the form of completing strategic projects or enhancing applications and services that drive revenue. Beyond the obvious benefits for the business, this re-focus on innovation will create a more valuable IT organization, and generally more invested team members.

With more time and energy focused on innovation, we now need to create a new culture within IT around sharing and educating. IT teams can no longer operate effectively in silos if they are truly to innovate. We have to remove the boundaries between the IT layers and share the knowledge our teams gather with the business overall. Only then can the business truly see and appreciate the advances IT is making in supporting its initiatives.

To keep this going long term you need to adjust your alignment towards shared success, both within IT and between IT and the rest of the organization. And don’t forget your partners, those that are now assisting with your foundational operations and management functions. By tying all of them together to a single set of success criteria and metrics, you will enforce good behavior and focus on the ultimate objective – delivery of world class IT applications and services that enable business growth and profitability.

Or, you could just stay in your proverbial server room, scanning error logs and scheduling patch updates.  You probably will survive.  But is survival all you want?

 

By Geoff Smith, Senior Manager, Managed Services Business Development

The top cloud computing threats and vulnerabilities in an enterprise environment

Picture credit: iStockPhoto

Analysis I’ve seen companies whose operational models are 90% based on cloud services, with the remaining 10% running on in-house servers. When asked about security issues related to those cloud services, the typical response was that the cloud service provider will take care of them, so there is nothing to worry about.

This isn’t necessarily the case with every cloud service provider: some CSPs have a good security model in place, while others clearly do not. There are many advantages to cloud services, which is why the cloud service model is used so extensively, but they are outside the scope of this article.

Before continuing, let’s quickly define the terms threat and vulnerability as we’ll use them throughout the article:

Vulnerability: a weakness that an attacker can exploit for personal gain. A weakness can be present in software, environments, systems, networks, etc.

Threat: an actor who wants to attack assets in the cloud at a particular time with a particular goal in mind, usually for the attacker’s financial gain and, consequently, the customer’s financial loss.

Cloud computing vulnerabilities

When deciding to migrate to the cloud, we have to consider the following cloud vulnerabilities:

Session Riding: Session riding happens when an attacker steals a user’s cookie in order to use the application in the user’s name. An attacker might also use CSRF (cross-site request forgery) attacks to trick the user into sending authenticated requests to arbitrary web sites to achieve various things.
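The standard defence against CSRF is a per-session anti-forgery token the attacker cannot guess. A minimal sketch in Python (illustrative only – the function names and the dict-backed session are our assumptions, not any particular framework’s API):

```python
import hmac
import secrets

def issue_csrf_token(session):
    """Generate a per-session anti-CSRF token and store it server-side."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token  # embedded in each form as a hidden field

def is_request_legitimate(session, submitted_token):
    """Reject state-changing requests whose token doesn't match the session's."""
    expected = session.get("csrf_token", "")
    # Constant-time comparison avoids leaking the token via timing.
    return hmac.compare_digest(expected, submitted_token or "")
```

Real frameworks (Django, Rails, Spring) ship equivalents of this check; the point is that the token must be secret, per-session, and compared in constant time.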

Virtual Machine Escape: In virtualized environments, the physical servers run multiple virtual machines on top of hypervisors. An attacker can exploit a hypervisor remotely by using a vulnerability present in the hypervisor itself – such vulnerabilities are quite rare, but they do exist. Additionally, a virtual machine can escape from the virtualized sandbox environment and gain access to the hypervisor and, consequently, to all the virtual machines running on it.

Reliability and Availability of Service: We expect our cloud services and applications to always be available when we need them, which is one of the reasons for moving to the cloud. But this isn’t always the case, especially during bad weather, when lightning strikes make power outages common. CSPs have uninterruptible power supplies, but even those can sometimes fail, so we can’t rely on cloud services being up and running 100% of the time. We have to take a little downtime into consideration – but the same is true when running our own private cloud.

Insecure Cryptography: Cryptographic algorithms rely on random number generators, which need unpredictable sources of information to build a large entropy pool. If a generator draws from only a small entropy pool, its output can be brute-forced. On client computers the primary sources of randomness are mouse movements and key presses, but servers mostly run without user interaction, which consequently means fewer sources of randomness. Virtual machines must therefore rely on whatever sources they have available, which can result in easily guessable numbers that provide little entropy for cryptographic algorithms.
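The difference is easy to demonstrate in Python: the standard `random` module is a deterministic PRNG that is fine for simulations but dangerous for keys, while the `secrets` module draws from the operating system’s CSPRNG:

```python
import random
import secrets

# random.Random is a deterministic PRNG: two generators seeded identically
# produce the same "random" key material -- exactly the low-entropy failure
# mode described above.
weak_a = random.Random(1234).getrandbits(128)
weak_b = random.Random(1234).getrandbits(128)
assert weak_a == weak_b  # fully predictable given the seed

# secrets draws from the OS CSPRNG (/dev/urandom, CryptGenRandom), which
# mixes kernel event entropy -- use it for keys, tokens, and salts.
strong_key = secrets.token_bytes(16)  # 128-bit key material
```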

Data Protection and Portability: When switching to a cheaper cloud service provider, we have to address the problems of data movement and deletion: the old CSP must delete all the data we stored in its data center so that nothing is left lying around.

Alternatively, a CSP that goes out of business needs to hand its customers their data so they can move to an alternate CSP, after which the data must be deleted. But what if the CSP goes out of business without providing the data? In such cases it’s safer to use a widely used CSP that has been around for a while – but either way, a data backup is still in order.

CSP Lock-in: We have to choose a cloud provider that will allow us to move to another provider easily when needed. We don’t want a CSP that forces us to use its own services exclusively, because sometimes we’d like to use one CSP for one thing and another CSP for something else.

Internet Dependency: By using cloud services, we’re dependent on the Internet connection, so if the Internet temporarily fails due to a lightning strike or ISP maintenance, clients won’t be able to connect to the cloud services. The business will then slowly lose money, because users can’t use the services required for business operation – not to mention services that need to be available 24/7, like hospital applications where human lives are at stake.

Cloud computing threats

Before deciding to migrate to the cloud, we have to look at the cloud security vulnerabilities and threats to determine whether the cloud service is worth the risk due to the many advantages it provides. The following are the top security threats in a cloud environment:

Ease of Use: Cloud services can easily be abused by malicious attackers, since the registration process is very simple: all it takes is a valid credit card. In some cases we can even pay for the cloud service with PayPal, Western Union, Payza, Bitcoin, or Litecoin, staying totally anonymous. The cloud can be used maliciously for various purposes, like spamming, malware distribution, botnet C&C servers, DDoS, and password and hash cracking.

Secure Data Transmission: When transferring data from clients to the cloud, the data needs to travel over an encrypted, secure communication channel such as SSL/TLS. This prevents attacks like man-in-the-middle (MITM) attacks, in which an attacker intercepting our communication could steal the data.
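As a minimal sketch of what “encrypted channel with verification” means in practice, here is a Python helper (the function name is our own) that refuses plaintext HTTP and relies on Python’s default SSL context, which verifies both the certificate chain and the hostname – skipping either check reopens the MITM hole:

```python
import ssl
import urllib.request

# A default SSLContext verifies the server certificate against the system
# trust store and checks the hostname -- both are needed to stop MITM.
ctx = ssl.create_default_context()
assert ctx.verify_mode == ssl.CERT_REQUIRED
assert ctx.check_hostname is True

def fetch_securely(url: str) -> bytes:
    """Fetch a URL, refusing plaintext HTTP and unverified certificates."""
    if not url.startswith("https://"):
        raise ValueError("refusing to send data over plaintext HTTP")
    with urllib.request.urlopen(url, context=ctx) as resp:
        return resp.read()
```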

Insecure APIs: Various cloud services on the Internet are exposed through application programming interfaces. Since the APIs are accessible from anywhere on the Internet, malicious attackers can use them to compromise the confidentiality and integrity of enterprise customers. An attacker who obtains the token a customer uses to access the service through its API can use that same token to manipulate the customer’s data. Therefore it’s imperative that cloud services provide a secure API, rendering such attacks worthless.
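One common hardening measure – sketched here in Python purely as an illustration, not any specific CSP’s scheme – is to sign each API request with a shared secret and a timestamp, so that a captured request or bearer token cannot simply be replayed later:

```python
import hashlib
import hmac
import time

SECRET = b"per-customer-shared-secret"  # provisioned out of band (assumption)

def sign_request(method: str, path: str, body: bytes, ts: int) -> str:
    """HMAC-SHA256 over the request's method, path, timestamp, and body."""
    msg = f"{method}\n{path}\n{ts}\n".encode() + body
    return hmac.new(SECRET, msg, hashlib.sha256).hexdigest()

def verify_request(method, path, body, ts, signature, max_skew=300):
    """Server-side check: fresh timestamp plus a matching signature."""
    # Stale timestamps defeat replay of a captured request.
    if abs(time.time() - ts) > max_skew:
        return False
    expected = sign_request(method, path, body, ts)
    return hmac.compare_digest(expected, signature)
```

Because the signature covers the body, an attacker who intercepts one request cannot tamper with the payload or reuse the signature on a different call.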

Malicious Insiders: Employees working at a cloud service provider could have complete access to the company’s resources. Cloud service providers must therefore have proper security measures in place to track employee actions, such as viewing a customer’s data. Since cloud service providers often don’t follow best security guidelines or implement a security policy, employees can gather confidential information from arbitrary customers without being detected.

Shared Technology Issues: SaaS/PaaS/IaaS providers use scalable infrastructure to support multiple tenants sharing the same underlying infrastructure. Directly on the hardware layer, hypervisors run multiple virtual machines, which themselves run multiple applications.

On the highest layer there are various attacks on SaaS, where an attacker is able to get access to the data of another application running in the same virtual machine. The same is true for the lowest layers, where hypervisors can be exploited from virtual machines to gain access to all the VMs on the same server (an example of such an attack is Red/Blue Pill). Every layer of shared technology – CPU, RAM, hypervisors, applications, etc. – can be attacked to gain unauthorized access to data.

Data Loss: Data stored in the cloud can be lost: a hard drive can fail, a CSP can accidentally delete data, an attacker can modify it, and so on. Data loss can have catastrophic consequences for a business, up to and including bankruptcy, which is why keeping proper backups is the best protection.

Data Breach: A data breach occurs when one virtual machine is able to access the data of another virtual machine on the same physical host – a problem that is far more serious when the two virtual machines belong to different customers. A side-channel attack, for example, occurs when a virtual machine uses a shared component, such as the processor’s cache, to access the data of another virtual machine running on the same physical host. Side-channel attacks are valid attack vectors and need to be addressed in everyday situations.

Account/Service Hijacking: Often only a password is required to access our account in the cloud and manipulate the data, which is why two-factor authentication is preferred. Nevertheless, an attacker who gains access to our account can manipulate and change the data, making it untrustworthy. An attacker with access to the cloud virtual machine hosting our business website can inject malicious code into the web page to attack visiting users – a so-called watering hole attack. An attacker can also disrupt the service by turning off the web server serving our website, rendering it inaccessible.
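Two-factor authentication typically adds a time-based one-time password (TOTP, RFC 6238) on top of the password. The whole algorithm fits in a few lines of standard-library Python; the sketch below uses the RFC’s default HMAC-SHA1 and 30-second steps:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, at=None, digits=6, step=30):
    """RFC 6238 time-based one-time password (HMAC-SHA1, 30 s steps)."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int((at if at is not None else time.time()) // step)
    mac = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # RFC 4226 "dynamic truncation": pick 4 bytes at an offset from the MAC.
    offset = mac[-1] & 0x0F
    code = (struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF) % 10 ** digits
    return str(code).zfill(digits)
```

With 30-second steps, a stolen code expires almost immediately, which is exactly what makes a phished password alone insufficient.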

Unknown Risk Profile: We have to take all security implications into account when moving to the cloud, including constant software security updates, monitoring networks with IDS/IPS systems, log monitoring, integrating SIEM into the network, etc. There might be multiple attacks that haven’t even been discovered yet, but they might prove to be highly threatening in the years to come.

Denial of Service: An attacker can launch a denial of service attack against the cloud service to render it inaccessible, thereby disrupting the service. There are a number of ways an attacker can do this in a virtualized cloud environment: by exhausting its CPU, RAM, disk space, or network bandwidth.
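A first line of defence against resource-exhaustion attacks is per-client rate limiting. A token-bucket limiter – sketched below in Python as an illustration, not a production design – allows short bursts while capping the sustained request rate:

```python
import time

class TokenBucket:
    """Per-client rate limiter: refuse requests once the budget is spent."""

    def __init__(self, rate: float, capacity: float):
        self.rate = rate            # tokens replenished per second
        self.capacity = capacity    # maximum burst size
        self.tokens = capacity
        self.updated = time.monotonic()

    def allow(self, cost: float = 1.0) -> bool:
        """Charge `cost` tokens; return False when the bucket is empty."""
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.rate)
        self.updated = now
        if self.tokens >= cost:
            self.tokens -= cost
            return True
        return False
```

In practice a service keeps one bucket per API key or source IP, so a single abusive client runs dry without starving everyone else.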

Lack of Understanding: Enterprises are adopting cloud services in everyday operations, but often they don’t really understand what they are getting into. When moving to the cloud there are different aspects we need to address: how the CSP operates, how the application works, how to debug the application when something goes wrong, whether data backups are already in place in case a hard drive dies, and so on. If the CSP doesn’t provide backups but the customer expects them, who is responsible when a hard drive fails? The customer will blame the CSP, but in reality it’s the customer’s fault for not familiarizing themselves with how the cloud service operates – and the result will be lost data.

User Awareness: The users of cloud services should be educated about different attacks, because the weakest link is often the user. There are multiple social engineering attack vectors an attacker might use to lure a victim into visiting a malicious web site and thereby gain access to the user’s computer. From there, the attacker can observe the user’s actions and view the same data the user is viewing – not to mention steal the user’s credentials and authenticate to the cloud service directly. Security awareness is an often overlooked security concern.

Conclusion

When an enterprise wants to move its current operations to the cloud, it should be aware of the cloud threats in order for the move to be successful. We shouldn’t rely on the cloud service provider to take care of security for us; instead, we should understand the security threats, ask our CSP how it addresses them, and continue from there.

We should also create remote backups of our data regardless of whether the CSP already provides a backup service – it’s better to have multiple backups than to discover, when restoration is needed, that the data was never backed up at all.

IBM rolls out dedicated BlueMix platform, gives customers more DevOps options


IBM has announced the rollout of new cloud-based DevOps services to enable enterprises to develop software faster, as well as launching a single-tenant version of its platform as a service (PaaS) Bluemix.

The aim of DevOps is to apply agile development principles to the larger, slower-moving enterprise market. With the cloud-based offering from IBM, organisations can now utilise collaborative lifecycle management either on-premise or as a managed service, as well as reduce the time and cost of testing in the cloud.

“Software success is increasingly indistinguishable from business success,” said Kristof Kloeckner, IBM software group general manager. “IBM is helping clients harness the collaborative power of the cloud to deliver business outcomes that can compete on the highest levels of agility, speed and collaboration, regardless of the current size or complexity of the organisation.”

Big Blue also described two of its clients who were utilising DevOps techniques – and increasing their output and productivity as a result. Nationwide Mutual Insurance reduced critical software defects by 80% in 18 months, resulting in 20% efficiency gains, whilst exam setting firm Pearson VUE is able to continually improve its test-taking experience through predictive analytics.

“The true value of DevOps is not just in efficiency,” said Steve Farley, vice president of application development at Nationwide. “We also need to anticipate and adapt to market changes and demands with speed, incorporating feedback more frequently to improve value to customers.”

Communications provider CenturyLink is also using DevOps in its cloud team, with Jared Wray, CTO of its cloud division, telling CloudTech about the new office layout – including collaborative spaces and ‘fishbowl’ rooms – so developers and operations can liaise with one another, improving efficiency.

Bluemix, announced as a developer-friendly PaaS earlier this year, is now available as a dedicated service: a collaborative, cloud-based platform in a single-tenant environment. Available features include data caching, runtimes that give developers the flexibility to run their apps in the language of their choice, and Cloudant’s database as a service.

Bluemix is also compatible with SoftLayer’s IaaS, further solidifying IBM’s ties with SoftLayer, which it snapped up last year. SoftLayer CEO Lance Crosby was profiled in a Bloomberg piece asking whether he could speed up Big Blue’s transition to the cloud. “There’s more smart people than I’ve ever met, but not necessarily cloud smart,” Crosby says of his colleagues.

IBM is also opening a Bluemix Garage at Level39 in London – a collaboration area for developers to talk about the cloud, complementing the existing one in San Francisco.

You can find out more about the new series of products here.

ITAR Compatibility Delivered by @Infor | @CloudExpo [#Cloud]

Infor has announced a new feature for Infor CloudSuite™ Aerospace & Defense (A&D) to aid compliance with International Traffic in Arms Regulations (ITAR). The ITAR function will serve as a complementary function for new or existing Infor CloudSuite A&D customers, facilitating compliance for Infor customers that are creating a US defense article or performing a US defense service and wish to benefit from cloud services.

The ITAR regulation governs handling and access requirements for data and physical equipment involved in the design, manufacture and delivery of weapons systems. This law creates a vital compliance factor for any organization responsible for the creation, distribution and trade of weapons. However, it also creates a potential layer of difficulty for such organizations seeking to deploy cloud services, as the law may impact third-party organizations that handle ITAR-related data. For this reason, Infor has invested in the controls required to help customers demonstrate ITAR compliance for CloudSuite Aerospace & Defense, as required by the Directorate of Defense Trade Controls (DDTC).


What Is a Digital Professional? By @TheEbizWizard | @CloudExpo [#BigData]

I’ve started using the phrase digital professional, in particular in my recent article dinging Amazon’s cloud division Amazon Web Services (AWS) for not having a clear digital strategy. However, I haven’t been very clear on what I mean, so it’s time to put a finer point on this terminology.

The role of digital professional begins back in the 1990s with the rise of the web professional: people who worked on web sites in some capacity. Some of them were technical, able to sling HTML or JavaScript or perhaps Perl back in the day.

Others were designated as creatives, including graphic designers and the like. A third subset of the web professional community was the marketing people: hammering out web strategies, focusing on key performance indicators like conversions and churn, and figuring out how to communicate with users over this newfangled World Wide Web of ours.

And finally, there were the information architects, designing page flows and form interactions, making sure site maps made sense and navigation worked as expected.


Cochlear Selects @AppZero_Inc for WS2003 Migrations [#Cloud]

Cochlear Limited, the global leader in implantable hearing solutions, has selected AppZero for easy migration of its Microsoft Windows Server 2003 applications, a major priority for IT organizations before Microsoft ends support on July 14, 2015. AppZero software enables server application migration from old operating systems to new platforms or clouds and has been proven to be ten times faster, more reliable and efficient than alternative approaches.


What to move to the cloud: A more mature model for SMEs


By Chris Chesley, Solutions Architect

Many SMBs struggle with deciding whether – and what – to move to the cloud. Whether it’s security concerns, cost, or lack of expertise, it’s often difficult to map out the best possible solution. Here are eight applications and services to consider when your organization is looking to move to the cloud and reduce its server footprint.

What to move to the cloud

1. Email

Obviously in this day and age email is a requirement in virtually every business. A lot of businesses continue to run Exchange locally. If you are thinking about moving portions of your business out to the cloud, email is a good place to start. Why should you move to the cloud?

Simple: it’s pretty easy to do, and at this point it’s been well documented that mail runs very well in the cloud. It takes a special skill set to run Exchange beyond just adding and managing users. If something goes wrong and you have an issue, it can often be very complicated to fix. It can also be pretty complicated to install, and a lot of companies do not have access to high-quality Exchange skills.

Moving to the cloud solves those issues. Having Exchange in the cloud also gets your company off the 3-5 year refresh cycle for the hardware that runs Exchange, as well as the upfront cost of the software.

Quick Tip – Most cloud e-mail providers offer anti-spam/anti-virus functionality as part of their offering. You can also take advantage of cloud-based AS/AV providers like McAfee’s MXLogic.

2. File shares

Small to medium sized businesses have to deal with sharing files securely and easily among their users. Typically, that means a file server running locally in your office or at multiple offices. This can present the challenge of making sure everyone has the correct access and that there is enough storage available.

Why should you move to the cloud? There are easy alternatives in the cloud to avoid dealing with those challenges. Such alternatives include Microsoft OneDrive, Google Drive or using a file server in Microsoft Azure. In most cases you can use Active Directory to be the central repository of rights to manage passwords and permissions in one place.

Quick Tip – OneDrive is included with most Office 365 subscriptions. You can use Active Directory authentication to provide access through that.

3. Instant messaging/online meetings

This one is pretty self-explanatory. Instant messaging can often be a quicker and more efficient form of communication than email. There are many platforms out there that can be used including Microsoft Lync, Skype and Cisco Jabber. A lot of these can be used for online meetings as well including screen sharing. Your users are looking for these tools and there are corporate options. With a corporate tool like Lync or Jabber, you can be in control. You can make sure conversations get logged, are secure and can be tracked. Microsoft Lync is included in Office 365.

Quick Tip – If you have the option, you might as well take advantage of it!

4. Active Directory

It is still a best practice to keep an Active Directory domain controller locally at each physical location to speed up the login and authentication process, even when some or most of the applications or services are based in the cloud. This still leaves most companies with an issue if their site or sites are down for any reason. Microsoft now provides the ability to run a domain controller in its cloud with Azure Active Directory, offering the redundancy many SMBs currently lack.

Quick Tip – Azure Active Directory is pre-integrated with Salesforce, Office 365 and many other applications. Additionally, you can setup and use multi-factor authentication if needed.

5. Web servers

Web servers are another very easy workload to move to the cloud, whether it’s Rackspace, Amazon, Azure, VMware, etc. The information is not highly confidential, so the risk is much lower than putting extremely sensitive data up there. By moving your web servers to the cloud, you also keep your website’s traffic off your local Internet connection; it all goes to the cloud instead.

Quick Tip – Most cloud providers offer SQL Server back-ends as part of their offerings. This makes it easy to tie the web server to a backend database. Make sure you ask your provider about this.

6. Backup 

A lot of companies are looking for alternate locations to store backup files. It’s easy to back up locally on disk or tape and then move offsite. It’s often cheaper to store in the cloud and it helps eliminate the headache of rotating tapes.

Quick Tip – account for bandwidth needs when you start moving backups to the cloud. This can be a major factor.

7. Disaster recovery

Now that you have your backups offsite, it’s possible to have capacity to run virtual machines or servers up in the cloud in the event of a disaster. Instead of moving data to another location you can pay to run your important apps in the cloud in case of disaster. It’s usually going to cost you less to do this.

Quick Tip – Make sure you look at your bandwidth closely when backing up to the cloud. Measure how much data you need to backup, and then calculate the bandwidth that you will need.  Most enterprise class backup applications allow you to throttle the backups so they do not impact business.
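That calculation is simple enough to sketch. The helper below (the function name and the 50% throttle default are our assumptions, and it ignores compression and deduplication, which usually help considerably) estimates the transfer window for a full backup:

```python
def backup_window_hours(data_gb, link_mbps, utilisation=0.5):
    """Estimate how long a full backup takes over a WAN link.

    utilisation caps how much of the link the backup may consume
    (throttling, per the tip above); figures are illustrative.
    """
    bits = data_gb * 8 * 1000**3                        # decimal GB -> bits
    seconds = bits / (link_mbps * 1000**2 * utilisation)
    return seconds / 3600

# e.g. 500 GB over a 100 Mbps link, throttled to half the link:
# 500 * 8e9 / (100e6 * 0.5) = 80,000 s, roughly 22.2 hours
```

If the window comes out longer than your backup schedule allows, that is the signal to look at incremental backups, seeding the first full backup by shipping a drive, or a bigger link.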

8. Applications

A lot of common applications that SMBs use are offered as a cloud service – for example, Salesforce and Microsoft Dynamics. These companies build and host the product so that you don’t have to run it onsite. You can take advantage of the application for a fraction of the cost and headache.

In conclusion, don’t be afraid to move different portions of your environment to the cloud. For the most part, it’s less expensive and easier than you may think. If I were starting a business today, the only things I would run locally would be an AD controller and a file server. The business can be faster and leaner without the IT infrastructure overhead that one needed to run a business ten years ago.

Looking for more tips? Download this whitepaper written by our CTO Chris Ward, “8 Point Checklist for a Successful Data Center Move.”

Get the Most Out of #SDN By @Riverbed | @CloudExpo [#Cloud]

Fundamentally, SDN is still mostly about network plumbing. While plumbing may be useful to tinker with, what you can do with your plumbing is far more intriguing. A rigid interpretation of SDN confines it to Layers 2 and 3, and that’s reasonable. But SDN opens opportunities for novel constructions in Layers 4 to 7 that solve real operational problems in data centers. “Data center,” in fact, might become anachronistic – data is everywhere, constantly on the move, seemingly always overflowing. Networks move data, but not all networks are suitable for all data.


An @AppNeta Review By @TheEbizWizard | @CloudExpo [#Cloud]

Up to this point, cloud-based Application Performance Management (APM) vendor AppNeta has focused on the “big five” application development environments: Java, .NET, PHP, Python, and Ruby. Recently they added Node.js to the list. As a result, their TraceView product can now monitor well over 90% of the modern code in production today.

However, there is more to the addition of Node.js than simply rounding out TraceView’s monitoring capabilities. Node.js plays an important part in today’s real-time digital world. To fully understand the importance of monitoring Node.js, therefore, we must place the tool in context.


Amazon Web Services’ Value Chain By @TheEbizWizard | @CloudExpo [#Cloud]

Amazon Web Services (AWS), the Amazon.com cloud computing juggernaut, wrapped up its annual re:Invent conference in Las Vegas last week. The event played host to many thousands of AWS devotees, hundreds of exhibiting partners, and dozens of press and analysts – but not to me. You see, I wasn’t invited.

Not for want of trying, mind you. It didn’t matter that I had attended last year as an analyst, that I write about them regularly, or even that my opinion of them has generally been quite favorable. You’d think that with my new role as a contributor to Forbes on the topic of digital transformation, I’d be a shoo-in for one of the coveted press/analyst passes. Didn’t happen.
