DevOps is all the rage these days, and with good reason: it promises to reduce time-to-market for new applications and to improve change management, allowing teams to deploy changes quickly and efficiently. However, DevOps isn’t something you buy, install, or implement; rather, it emerges from the right organizational system. In his session at DevOps Summit, Mark Thiele, EVP of Data Center Technologies at SUPERNAP International, discussed how to get to an organizational model that allows DevOps practices to flourish.
[slides] @Docker Deployments | @DevOpsSummit @JPetazzo #SDN #DevOps
Thanks to Docker, it becomes very easy to leverage containers to build, ship, and run any Linux application on any kind of infrastructure. Docker is particularly helpful for microservice architectures, whose successful implementation relies on a fast, efficient deployment mechanism – precisely one of Docker’s strengths. Microservice architectures are therefore becoming more popular, and are increasingly seen as an interesting option even for smaller projects, rather than being reserved for the largest, most complex application stacks.
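To make the build-ship-run cycle concrete, here is a minimal sketch using the Docker SDK for Python; the image name, build path, and port mapping are illustrative assumptions, not details from the session.

```python
# Minimal build-and-run sketch using the Docker SDK for Python (docker-py).
# Assumes a local Docker daemon and that ./myservice contains a Dockerfile;
# "myservice:latest" and port 8080 are placeholder choices.
import docker

client = docker.from_env()

# Build the microservice image from the local Dockerfile.
image, build_logs = client.images.build(path="./myservice", tag="myservice:latest")

# Run the container detached, publishing the service port to the host.
container = client.containers.run(
    "myservice:latest",
    detach=True,
    ports={"8080/tcp": 8080},  # container port -> host port
)
print(f"{container.short_id} is running")
```

The same script works unchanged on a laptop or a cloud VM, which is exactly the portability argument made for containerized microservices.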
[session] Databases in @Docker | @DevOpsSummit @Ocean9Inc #DevOps
Hardware virtualization and cloud computing allowed us to increase resource utilization and respond more flexibly to business demand. Docker containers are billed as the next quantum leap – but are they? Databases have always posed their own set of challenges: they are workloads that demand maximum I/O, network, and CPU resources, combined with data locality.
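As a small illustration of the data-locality concern, one common approach is to pin the database’s data directory to a named volume so the data survives container restarts. The sketch below does this with the Docker SDK for Python; the image tag, credential, and volume name are assumptions for the example, not recommendations from the session.

```python
# Sketch: run a PostgreSQL container with its data directory on a named
# volume so the data outlives the container. Image tag, credential, and
# volume name are illustrative placeholders.
import docker

client = docker.from_env()

db = client.containers.run(
    "postgres:13",
    detach=True,
    environment={"POSTGRES_PASSWORD": "example"},  # placeholder credential
    volumes={"pgdata": {"bind": "/var/lib/postgresql/data", "mode": "rw"}},
    ports={"5432/tcp": 5432},
)
print(f"database container {db.short_id} started")
```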
[slides] The Speed of #DevOps | @DevOpsSummit @MHExcalibur @Spirent #CD
The pace of software change in large-scale, fast-moving DevOps environments presents a challenge for continuous testing, and many organizations struggle to get it right. Practices that work for small-scale continuous testing may not be sufficient as requirements grow.
In his session at DevOps Summit, Marc Hornbeek, Sr. Solutions Architect of DevOps continuous test solutions at Spirent Communications, explained best practices for continuous testing at high scale. These practices are relevant even to small-scale DevOps shops that expect to grow, because the number of build targets, test topologies and delivery topologies that must be orchestrated grows rapidly.
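To give a flavor of that orchestration problem, here is a toy sketch that fans test suites out across many build-target/topology combinations in parallel; the target names, topology names, and run_tests.sh runner are hypothetical placeholders, not Spirent’s tooling.

```python
# Toy sketch: run a test suite for every build-target/topology combination
# in parallel. Targets, topologies, and ./run_tests.sh are hypothetical.
import subprocess
from concurrent.futures import ThreadPoolExecutor
from itertools import product

BUILD_TARGETS = ["linux-x86", "linux-arm", "windows"]    # placeholders
TOPOLOGIES = ["single-node", "cluster", "cloud-hybrid"]  # placeholders

def run_suite(target, topology):
    # Each combination gets its own test run; return codes feed the pipeline.
    result = subprocess.run(["./run_tests.sh", target, topology])
    return target, topology, result.returncode

with ThreadPoolExecutor(max_workers=4) as pool:
    combos = product(BUILD_TARGETS, TOPOLOGIES)
    for target, topo, rc in pool.map(lambda tt: run_suite(*tt), combos):
        print(f"{'PASS' if rc == 0 else 'FAIL'}: {target} / {topo}")
```

Even this toy version shows why small-scale practices break down: the run matrix grows multiplicatively with every new target or topology.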
ProtectWise Gets Another Round of Funding
A lot of innovation is happening in the cloud, and much of it is coming from startups. Fortunately, venture capitalists understand the importance of cloud innovation and the role of startups in a cloud ecosystem, which is why these startups get a reasonable amount of funding to continue their operations. The latest cloud startup to raise funding – a $25 million round – is ProtectWise, a cloud security company based in Denver, Colorado. Investors who participated in this round include Top Tier Capital Partners, Tola Capital, Arsenal Venture Partners, and a few other unnamed venture capitalists. With this round, the total capital raised by the company so far has reached $67 million.
The company was founded by Scott Chasin and Gene Stevens in 2013 to provide the highest possible levels of visibility into any network, and with about 70 employees it has grown rapidly over the last three years. Its flagship product, ProtectWise Grid, records all networking activity in your organization’s internal and external networks, including the cloud system where your data is stored and the network through which it is accessed. This information is indexed and analyzed to identify breaches, or even emerging threats, so they can be addressed as early as possible. In many ways, the product acts as a CCTV camera for your organization’s network – one that can even be rewound to see how and when a hacker got in.
Such information can be invaluable in today’s business networks, as there is a marked increase in the number of hacking incidents. According to the Global State of Information Security report released by PricewaterhouseCoopers, breaches rose 38 percent in 2016 compared to the previous year, including a 28 percent increase in cloud architecture and mobile device breaches. The same report says this situation is not expected to improve in 2017, which means businesses are looking for ways to protect their networks in every way possible.
Given this scenario, it’s no surprise that companies like ProtectWise are getting the funding to continue their operations. Though this approach to cyber security has been tried in the past, what makes ProtectWise Grid unique is that all the protection happens in the cloud, where visibility is lowest. Typically, network recordings are kept on large storage systems whose data can take many months to analyze; as a result, breaches are hard to spot as they occur and are sometimes identified only after much data has been lost. To avoid this, ProtectWise uses algorithms that analyze massive amounts of data in a short time, breaking complex data down into small, manageable blocks that are easy to analyze, so breaches and vulnerabilities can be identified as early as possible.
Let’s hope this product can break many of the barriers that exist in cloud security and provide us with a clean and efficient system for protecting data.
US Army signs $62 million deal to move to IBM’s cloud
IBM has secured a major cloud customer win in the form of the US Army with a potential $62 million (£50.3m) contract over five years, the company has announced.
The deal, designed for the Army’s Redstone Arsenal in Alabama, is for the Army Private Cloud Enterprise (APCE) program and comes under the Army Private Cloud 2 (APC2) contract – an ‘indefinite delivery’ arrangement that started on December 31 last year and expires exactly five years later.
Alongside building the infrastructure, the deal will see IBM provide infrastructure as a service (IaaS) offerings. The Armonk giant expects the Army to migrate up to 35 of its current applications to the private cloud in the coming 12 months.
“With this project, we’re beginning to bring the IT infrastructure of the US Army into the 21st century,” said Lt. Gen. Robert Ferrell, US Army CIO in a statement. “Cloud computing is a game-changing architecture that provides improved performance with high efficiency, all in a secure environment.”
“Clients today are increasingly looking at the cloud as a pathway to innovation,” said Sam Gordy, IBM US federal general manager. “This IBM Cloud solution will provide the Army with greater flexibility and will go a long way toward mitigating, and, in some cases eliminating, the security challenges inherent with multiple ingress and egress points.”
IBM’s impact level 5 (IL-5) classification from the Defense Information Systems Agency (DISA) enables this project to go ahead. The Army expects IBM to achieve DISA IL-6 certification in the coming year; IL-6 is the highest possible level of security and would enable IBM to work with classified information up to ‘secret’.
This move marks a significant change from just four years ago; back in April 2013, a report from the Inspector General of the United States Department of Defense found that the Army’s previous CIO – Ferrell was nominated for the role in December of that year – was unaware of more than 14,000 mobile devices in use throughout the organisation.
The deal will cost $62m ‘if the Army exercises all options’, IBM added.
Top @Docker Content of 2016 | @DevOpsSummit #DevOps #SDN #APM #AI
2016 has been an amazing year for Docker and the container industry. There were three major releases of Docker Engine this year, and a tremendous increase in usage. The community has been following along and contributing amazing Docker resources to help you learn and get hands-on experience. Here is some of the most-read and most-viewed content of the year. Of course, releases are always popular, particularly when they address requests from the community.
Tune into the Cloud: Closer | @CloudExpo #IoT #M2M #Cloud #DigitalTransformation
In 2016, cloud computing celebrated its tenth anniversary, so at the beginning of 2017 we’ll briefly look at what the next decade may bring. This installment also closes the version of Tune into the Cloud that was published monthly in Cloudworks, the Dutch print magazine where this series started.
At ten, as a fresh teenager, you’re normally not yet an adolescent – that is, a budding adult – and you may not yet be seated at the adult table. But there are exceptions; sometimes things just move faster. Look at Max Verstappen, the Dutch Formula 1 phenomenon who – by literally going faster – now routinely starts at the front of the grid. The cloud, too, has maneuvered itself into pole position for the 2017 race, with more and more “fans” opting for a ‘cloud-first’, ‘all-in cloud’ or even ‘cloud-only’ strategy.
Turn your DevOps volume up to 11 and seize the business opportunity
(c)iStock.com/gbrundin
“These go to 11.” It’s one of the great lines from the classic mockumentary This Is Spinal Tap, spoken by fictional rock guitarist Nigel Tufnel in the wonderful scene where he proudly shows off an amplifier whose volume dial runs from 0 to 11. Still hilarious after 30-odd years.
With DevOps and continuous delivery, organisations can also shoot for 11 – striving to take automation to its ultimate limit and then exceeding it. One hundred software deployments today, why not thousands tomorrow? Heck, let’s crank up the release volume to the max and push every code commit all the way to production.
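To make “every commit to production” concrete, here is a minimal sketch of a commit-triggered deployment hook in Python; the /webhook route, the main-branch check, and the deploy.sh script are hypothetical placeholders rather than any particular product’s interface.

```python
# Minimal sketch of a push-to-production webhook: deploy on every commit
# to the main branch. Route, branch check, and ./deploy.sh are hypothetical.
import subprocess

from flask import Flask, request

app = Flask(__name__)

@app.route("/webhook", methods=["POST"])
def on_push():
    payload = request.get_json(silent=True) or {}
    # Many Git hosts include the pushed ref in the payload; deploy only main.
    if payload.get("ref") == "refs/heads/main":
        subprocess.run(["./deploy.sh"], check=True)  # hypothetical deploy script
        return "deployed", 200
    return "ignored", 200

if __name__ == "__main__":
    app.run(port=8000)
```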
But just being able to turn up the deployment dial is child’s play compared to what DevOps can deliver. True practitioners understand implicitly that volume counts for nothing if application quality or supportability suffers. Not content with velocity alone, expert practitioners ramp up all dials to 11 on the business amplifier – concurrently – including quality, resilience, supportability and compliance.
This takes some seriously effective collaboration supported by advanced cross-functional toolset integrations. Here are ‘11’ good ones to consider:
- Automatically convert user stories from agile planning into the smallest set of test cases which cover 100 percent of the functionality in the user stories, linked to the right data and expected results.
- Provide a real-time dashboard for managing and monitoring multi-application release content (user stories, features, bug fixes) through the entire release pipeline. This enables teams to gain visibility of release progress, more easily reconcile dependencies and map to business requirements.
- Automatically attach test data criteria to test cases produced with agile planning tools. As features are promoted from dev to test to pre-production, this allows test cases to execute against data that already exists in the target environment.
- Integrate application performance management (APM) within continuous integration to check software builds against pass/fail conditions (see the sketch after this list). Top solutions enable developers to invoke this quality check right from the tools they use (e.g. the Jenkins dashboard) and jump, in context, to the relevant APM data.
- Leverage service virtualization to generate realistic services and inject referentially correct data with fully integrated test data management. This significantly improves the efficiency and quality of testing, while reducing compliance risk.
- Speed the test bed preparation process by automating test data generation and reservation services as part of each deployment workflow.
- Automatically initiate test case processes and tie the results back into releases to determine go/no go for automated promotion, enabling faster, higher quality deployments.
- Provision virtual services and execute test suites on multiple virtual environments directly within a deployment workflow. By deploying into any testing environment, teams no longer have to wait for hardware environments to be built and become ready for testing.
- Automatically deploy and initiate application performance monitoring as part of a deployment workflow, with metric capture feeding back critical performance information before and after release promotion.
- Integrate API management with APM for in-depth visibility into problematic APIs that impact the customer experience and application performance. Capabilities should extend to proactive alerting on emerging issues and the ability to trace the interaction of APIs related to specific business transactions.
- Enable APM users to incorporate load test scenarios with key performance metrics into their business analysis. This supports faster detection of issues and more opportunities to positively improve application quality.
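To give a flavor of the APM-in-CI integration above, here is a minimal sketch of a build quality gate: it pulls a response-time metric from an APM endpoint and fails the build when a threshold is exceeded. The endpoint URL, metric name, and threshold are hypothetical; commercial APM suites and their Jenkins plugins package this logic for you.

```python
# Sketch of a CI quality gate against APM data: fail the build if the
# 95th-percentile response time exceeds a threshold. Endpoint URL, metric
# name, and threshold are hypothetical placeholders.
import sys

import requests

APM_URL = "https://apm.example.com/api/metrics"  # hypothetical endpoint
THRESHOLD_MS = 500  # pass/fail condition agreed with the business

def check_build(build_id):
    resp = requests.get(
        APM_URL, params={"build": build_id, "metric": "p95_response_ms"}
    )
    resp.raise_for_status()
    p95 = resp.json()["value"]
    print(f"build {build_id}: p95 response {p95} ms (limit {THRESHOLD_MS} ms)")
    return p95 <= THRESHOLD_MS

if __name__ == "__main__":
    # A CI step (e.g. a Jenkins job) invokes this with the build id;
    # a non-zero exit code marks the build as failed.
    sys.exit(0 if check_build(sys.argv[1]) else 1)
```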
There will be many more toolchain integrations to consider, but these 11 are great ones to start with. They help ensure quality is injected into every release, while addressing key issues associated with maintaining compliance and improving application supportability.
Also worth noting is how essential data, metrics, and workflows are integrated seamlessly in the context of roles, functions, and processes across the entire software lifecycle. This way the whole pipeline moves with purpose, while the individual disciplines – whether development, testing, or IT operations – are never impaired.
So go ahead, crank up your DevOps practices to 11 and fully amplify the incredible digital business opportunity.