Building a cross-cloud operational model can be a daunting task. Per-cloud silos are not the answer, but neither is a fully generic abstraction plane that strips out capabilities unique to a particular provider. In his session at 20th Cloud Expo, Chris Wolf, VP & Chief Technology Officer, Global Field & Industry at VMware, will discuss how successful organizations approach cloud operations and management, with insights into where operations should be centralized and when it’s best to decentralize.
Monthly archive: March 2017
2nd Watch Gets a New Round of Funding
2nd Watch, a managed cloud services provider, has raised $19 million in Series D funding. The round was led by Delta-v Capital, with participation from Madrona Venture Group, Columbia Capital and Top Tier Capital Partners.
The Seattle-based company plans to use the money to scale its cloud operations, open a managed cloud operations center in North Carolina, and hire across sales, software, operations and client management. A good chunk of the funding is expected to go towards serving its East Coast clients through a dedicated center in Raleigh.
2nd Watch is an AWS Premier Partner providing managed cloud services across the AWS ecosystem. It was founded in 2010 with a clear plan to design, build and manage public clouds in areas such as big data analytics, digital marketing and cloud analytics. Specifically, the company helps clients modernize their IT infrastructure and services in the cloud.
In 2010, it was one of the first companies to join the AWS Partner Network. It was among the first audited and approved AWS managed service partners, and it even has the distinction of being the only cloud-native partner to earn SOC 2 compliance with a perfect score.
Over the years, it has added many prestigious clients, including Conde Nast, Coca-Cola, Lenovo, Motorola and Yamaha. Riding this rapid growth, it has been adding more people to its rolls, hiring 20 in February alone. The company now has about 160 employees and plans to grow to 200 by the second half of 2017 to meet the demands of its growing client base.
In terms of its cloud presence, 2nd Watch claims that it has 400 enterprise workloads under management and more than 200,000 instances in its managed cloud services.
The company's success once again highlights the growing cloud market and the many opportunities it presents for small and medium-sized companies to carve out a niche. Hundreds of companies today offer specialized services, making the cloud a more attractive and feasible option for clients around the world.
This track record has helped the company raise $56 million so far, and going forward it is expected to win more business and a larger client base. According to 2nd Watch CEO Doug Schneider, the firm doubled its revenue in 2016, much of which can be attributed to the growing interest of companies across different sectors in moving to the cloud. Almost every company today understands the power of the cloud and is expected, sooner or later, to move to it.
Schneider said that to meet the growing needs of its current and future clients, the company needs more investment. Given the astronomical growth it has seen over the last year, funding should never be an issue, as long as the money is directed into the right channels to further propel growth.
It’s sure going to be an exciting ride for 2nd Watch.
AWS says human error to blame for S3 outage which took down multitude of sites
Earlier this week, Amazon Web Services' (AWS) S3 (Simple Storage Service) suffered an extended period of service disruption, knocking a multitude of sites and businesses offline – and the fault was all down to good old-fashioned human error, according to the company.
According to a note published to customers, the fault occurred during a debugging session. “At 9:37AM PST, an authorised S3 team member using an established playbook executed a command which was intended to remove a small number of servers for one of the S3 subsystems that is used by the S3 billing process,” the note reads. “Unfortunately, one of the inputs to the command was entered incorrectly and a larger set of servers was removed than intended.”
As a result, this greater-than-expected removal prompted a full restart in the US-EAST-1 region, which also meant that other AWS services, such as new Amazon Elastic Compute Cloud (EC2) instance launches, Elastic Block Store (EBS), and Lambda, were affected.
The resulting casualty list was vast, including Quora, Slack, and Medium. Some users reported that their Internet of Things (IoT)-enabled services, such as connected lightbulbs and thermostats, had gone blank because they were connected to the Amazon backend, while AWS itself could not change its status dashboard, meaning green lights were erroneously blinking away while the chaos unfolded.
AWS, as one would expect in such a situation, said it would make several changes to ensure the issue does not happen again. The first step, which has already been carried out, was to modify its capacity tool to remove capacity more slowly and to add safeguards that prevent capacity from being removed when any subsystem would fall below its minimum required level. The company has also said it will change its status dashboard to run across multiple regions with less dependence on S3, adding that while the AWS Twitter feed tried to keep users updated, it understood the dashboard provided 'important visibility' to customers.
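AWS has not published the internals of its capacity tooling, so the following is only a minimal sketch of the kind of safeguard it describes: before executing a removal, verify that the remaining fleet stays above the subsystem's minimum. The subsystem names, minimum fleet sizes and the `remove_servers` helper below are all hypothetical.

```python
# Hypothetical capacity-removal safeguard. The subsystem names and minimums
# are illustrative assumptions, not AWS internals.

MIN_FLEET_SIZE = {"index": 50, "placement": 20}  # assumed per-subsystem minimums

def remove_servers(subsystem: str, current_count: int, to_remove: int) -> int:
    """Validate a capacity-removal request before executing it."""
    minimum = MIN_FLEET_SIZE[subsystem]
    remaining = current_count - to_remove
    if remaining < minimum:
        raise ValueError(
            f"Refusing removal: {subsystem} would drop to {remaining} servers, "
            f"below its minimum of {minimum}."
        )
    # ...proceed with a deliberately slow, rate-limited removal here...
    return remaining

remove_servers("index", current_count=60, to_remove=5)    # fine: 55 remain
# remove_servers("index", current_count=60, to_remove=15) # raises ValueError
```

The point of such a guard is that a mistyped input, like the one in the incident, fails loudly before any capacity is touched rather than after.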
So what happens from here? Naturally, the resulting conversation turned to best practice: don't put 'all your eggs in one cloud', as Chuck Dubuque, VP of product and solution marketing at Tintri, put it. "This is a wakeup call for those hosted on AWS and other providers to take a deeper look at how their infrastructure is set up and emphasises the need for redundancy," said Shawn Moore, CTO at Solodev. "If nothing else, the S3 outages will cause some businesses to reconsider a diversified environment – that includes enterprise cloud – to reduce their risks," Dubuque added.
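Redundancy can start within AWS itself. As a minimal sketch of one option, the boto3 snippet below enables S3 cross-region replication so that objects written to a primary bucket are automatically copied to a bucket in another region. The bucket names and IAM role ARN are placeholders; both buckets are assumed to already exist, and replication requires versioning on each.

```python
# Minimal sketch: enable S3 cross-region replication with boto3.
# Bucket names and the IAM role ARN are placeholders for illustration.
import boto3

s3 = boto3.client("s3")

# Replication requires versioning on both the source and destination buckets.
for bucket in ("example-primary-us-east-1", "example-replica-us-west-2"):
    s3.put_bucket_versioning(
        Bucket=bucket,
        VersioningConfiguration={"Status": "Enabled"},
    )

s3.put_bucket_replication(
    Bucket="example-primary-us-east-1",
    ReplicationConfiguration={
        # IAM role granting S3 permission to replicate on your behalf.
        "Role": "arn:aws:iam::123456789012:role/example-replication-role",
        "Rules": [
            {
                "ID": "replicate-everything",
                "Prefix": "",  # empty prefix = replicate all objects
                "Status": "Enabled",
                "Destination": {"Bucket": "arn:aws:s3:::example-replica-us-west-2"},
            }
        ],
    },
)
```

Replication alone does not fail over reads or writes automatically, but it does ensure a second region holds a copy of the data when an outage like this one hits.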
“We want to apologise for the impact this event caused for our customers,” AWS added. “While we are proud of our long track record of availability with Amazon S3, we know how critical this service is to our customers, their applications and end users, and their businesses.
“We will do everything we can to learn from this event and use it to improve our availability even further.”
You can read the full statement here.
ChatOps Interface | @DevOpsSummit @VictorOps #DevOps #IoT #ChatOps
The modern software development landscape consists of best practices and tools that allow teams to deliver software in a near-continuous manner. By adopting a culture of automation, measurement and sharing, teams have greatly reduced the time it takes to ship code, allowing for shorter release cycles and quicker feedback from customers and users. Still, with all of these tools and methods, how can teams stay on top of what is taking place across their infrastructure and codebase? Hopping between services and command line interfaces creates context-switching that slows productivity and efficiency, and may lead to early burnout.
[session] #ChatOps for #DevOps | @DevOpsSummit @Addteq #AI #CD #SDN
ChatOps is an emerging topic that has led to the wide availability of integrations between group chat and various other tools and platforms. Currently, HipChat is an extremely powerful collaboration platform due to the various ChatOps integrations that are available. However, DevOps automation can involve orchestration and complex workflows. In his session at @DevOpsSummit at 20th Cloud Expo, Himanshu Chhetri, CTO at Addteq, will cover practical examples and use cases such as self-provisioning infrastructure/applications, self-remediation workflows, integrating monitoring, and complementary integrations between Atlassian tools and other top tools in the industry.
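To give a flavour of what a self-remediation workflow can look like, here is a minimal, hypothetical ChatOps handler that maps a chat message such as `!restart web-01` to an operations action. The command set and the `restart_host` helper are invented for illustration and are not tied to HipChat's actual API.

```python
# Hypothetical ChatOps sketch: map chat commands to remediation actions.
# restart_host() is a placeholder; a real bot would call your chat
# platform's webhook API and your orchestration tooling.

def restart_host(hostname: str) -> str:
    # Placeholder: in practice this would invoke your orchestration layer.
    return f"restart issued for {hostname}"

COMMANDS = {
    "restart": restart_host,
}

def handle_message(message: str) -> str:
    """Parse a '!command arg...' chat message and run the matching action."""
    if not message.startswith("!"):
        return ""  # not addressed to the bot
    command, *args = message[1:].split()
    action = COMMANDS.get(command)
    if action is None:
        return f"unknown command: {command}"
    return action(*args)

print(handle_message("!restart web-01"))  # -> "restart issued for web-01"
```

The appeal is that the remediation happens in the shared channel, so the whole team sees who ran what and when, without anyone switching to a terminal.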
AI Track at @CloudExpo | #ArtificialIntelligence #BigData #AI #ML #DL #IoT
Artificial Intelligence has become a topic of intense interest throughout the cloud developer and enterprise IT communities.
Accordingly, attendees at the upcoming Cloud Expo | @ThingsExpo at the Javits Center in New York, June 6-8, will find fresh content in a new track called Cognitive Computing | Artificial Intelligence, Machine Learning, Deep Learning. Cloud Expo is still accepting submissions for this track, so please visit www.cloudcomputingexpo.com for the latest information.
20th Cloud Expo, taking place June 6-8, 2017, at the Javits Center in New York City, NY, will feature technical sessions from a rock star conference faculty and the leading industry players in the world. Cloud computing is now being embraced by a majority of enterprises of all sizes. Yesterday’s debate about public vs. private has transformed into the reality of hybrid cloud: a recent survey shows that 74% of enterprises have a hybrid cloud strategy.
Join @Interoute’s @Will_Morrish June 6-8 in NYC | @CloudExpo #AI #ML #IoT
“My role is working with customers, helping them go through this digital transformation. I spend a lot of time talking to banks, big industries, manufacturers working through how they are integrating and transforming their IT platforms and moving them forward,” explained William Morrish, General Manager Product Sales at Interoute, in this SYS-CON.tv interview at 18th Cloud Expo, held June 7-9, 2016, at the Javits Center in New York City, NY.
IBM and @Skytap #DevOps Education Track at @DevOpsSummit | #AI #SDN
Technology innovation is the driving force behind modern business, and enterprises must respond by increasing the speed and efficiency of software delivery. The challenge is that existing enterprise applications are expensive to develop and difficult to modernize. This often results in what Gartner calls “Bimodal IT,” where businesses struggle to apply modern tools and practices to traditional monolithic applications. But these existing assets can be modernized and made more efficient without having to be completely overhauled. By leveraging methodologies like DevOps and agile, alongside emerging technologies like cloud-native services and containerization, traditional applications and teams can be more easily modernized without risking everything that depends on them. This session will describe how to apply lessons learned from modern app development, including the starting point to modernization that many enterprises are using to quickly improve speed, efficiency, and software quality.
[slides] Bimodal Digital Future | @CloudExpo @Interoute #DigitalTransformation
Traditional IT, great for stable systems of record, is struggling to cope with the newer, agile systems-of-engagement requirements coming straight from the business. In his session at 18th Cloud Expo, William Morrish, General Manager of Product Sales at Interoute, outlined ways of exploiting new architectures to enable both kinds of systems, and of building them to support existing platforms with an eye to the future. Technologies such as Docker and the hyper-convergence of computing, networking and storage create a platform for consolidation, migration and enabling digital transformation.