Siemens upgrades Digital Innovation Platform with Slack and Microsoft Teams integration


Clare Hopping

14 Jan, 2019

Siemens PLM Software has added support for Slack and Microsoft Teams to its Digital Innovation Platform through the Teamcenter Cloud Collaboration Hub.

The update means Slack and Microsoft Teams users can post messages and view their conversation history directly from their Active Workspace in Teamcenter.

Siemens PLM Software said the integration should result in higher-quality products, as engineers and other development teams can view each other’s work and provide feedback in real time.

It can be used across departments, making conversations that require cross-team decisions easy to manage. For example, if engineering and the project management team need to communicate with the supply chain team, those conversations can be opened to the relevant parties without cluttering other departments’ Workspace.

As well as allowing for more productive conversation streams with collaborators – whether employees, clients or partners – it should also make collaboration on projects much faster, with instant communication between employees in the engineering world.

“Gaining real-time access to the information and leveraging collaborative efforts from distinct teams to drive insights from that information have become inevitably essential to design and innovative best-in-class products in today’s globally distributed manufacturing environment,” said Shubhang Tripathi, senior product marketing manager at Siemens PLM Software.

The Slack and Microsoft Teams integration is available across the Active Workspace environments, including in-browser and mobile, whether on a smartphone or tablet, so employees can collaborate whether they’re at their desk or in the field.

As is the case with all Active Workspace integrations, conversations can be filtered and are only presented if they’re relevant to the user’s job role, making it simple to find updates and act upon them quickly.

Twilio appoints Google Hangouts innovator as chief product officer


Clare Hopping

14 Jan, 2019

Twilio has appointed Google Hangouts inventor Chee Chew as chief product officer, a move intended to help developers create more immersive experiences for customers.

Chew has more than 25 years of experience helping businesses such as Amazon, Hewlett-Packard, Google and Microsoft improve the ways customers engage with their brands.

For example, in his latest role as vice president of consumer engagement at Amazon, he helped improve its mobile shopping experience.

As vice president of engineering at Google, Chew was instrumental in the development of Google Voice, Google Hangouts and Seattle/Kirkland’s Google Chrome, enabling customers to communicate more effectively and be more productive throughout their day.

“As an engineer by trade, I’m extremely passionate about helping Twilio continue to build APIs and SDKs that will empower developers,” said Chew. “I’m excited to join Twilio and help developers and innovative businesses around the world build contextual and intelligent communications that will reinvent how companies engage with their customers.”

Chew will use his experience working with developers to help Twilio focus on customer experience, enabling other businesses to offer enhanced contextual and intelligent communication to their clients.

He will sit on the executive leadership team and report to Twilio co-founder and CEO Jeff Lawson.

“Chee brings an incredibly unique combination of skills and experiences to Twilio – from running customer engagement for the world’s largest e-commerce company to leading the invention of Google Hangouts,” said Lawson.

“Chee’s leadership in building outstanding products, teams and companies will be a great addition to the Twilio executive team.”

Enterprise cloud journeys in 2019: Plain sailing or storms on the horizon?

A rising tide of regulatory, governance and security concerns, a paucity of talent experienced in large-scale transformation, and boardroom concerns over the real business impact can make cloud migration feel like a journey towards shark-infested waters.

Despite these challenges, 2019 will be the year in which the UK’s largest enterprises get serious about moving production workloads to the cloud and transforming their business. But will these journeys be plain sailing or will they be punctuated by periods of turbulence?

Here’s my long-range forecast.

Enterprise cloud spending will soar

Large enterprises will finally bite the bullet and fully mobilise cloud adoption to boost agility and compete more effectively against born-in-the-cloud challengers. As the incumbents gain more confidence with cloud technology, we will see transactional systems and key databases, as well as inventory, eCommerce and operations apps, all start to get the cloud treatment, leading to a bumper year for cloud vendors.

But enterprises will need to keep a close eye on costs to prevent them spiralling out of control. Cost optimisation will continue to be a focus as enterprises ensure their commercial structure and cost base are fully aligned ahead of a broader drive to the cloud. Once there, they will need the right controls and mechanisms in place to avoid cloud shock when the bill arrives.

Enterprises with immature cloud operating models will get bitten

Enterprises that made the leap to the cloud prematurely, without putting in place the right operating model, strategic architecture and plan, are likely to experience runaway costs, security breaches and application availability issues. Perhaps the Gartner Hype Cycle for public cloud is now nearing the ‘trough of disillusionment’. Without the right people, processes and technology in place, it is not uncommon for the cloud adoption journey to stall as organisations attempt to migrate more complex workloads and sensitive data.

A move to the cloud requires an enterprise-wide shift in mindset that is easy to underestimate. Our experience shows that technology accounts for only about 20% of a transformation effort, with cultural and operational change accounting for the remaining 80% or thereabouts. You can spend all the money in the world, but until you persuade people to think differently and collaborate in new ways, that investment risks being wasted. Of particular importance will be how organisations attract, train and retain talent to support their transformation ambitions; cloud services partners will feature prominently throughout this journey.

Finding talent will get tougher

Demand for data scientists, cloud architects and enterprise-capable cloud experts will increase against a backdrop of a restricted talent pool. This will limit the velocity of adoption and transformation to those organisations that can attract and retain talent. 

ML + AI will open up endless possibilities for digital transformation

Competition from fintech entrants, coupled with customers’ increased expectation of a more personalised experience, will drive large enterprises to adopt more sophisticated analytics via cloud-hosted AI and ML capabilities. These incumbents, who have years’ worth of data, will look to consolidate, distil and leverage it to leapfrog the competition, both through an enhanced customer experience and by monetising the rich information that will be unlocked.

Enterprise multi-cloud will become the norm

At the same time that cloud providers’ capabilities are converging, differentiators are emerging that make multi-cloud the de facto option for large enterprises. A multi-cloud strategy also mitigates the potential for vendor lock-in and helps address regulatory concerns around exit planning and concentration risk. Watch out, though, for the additional complexity that comes with managing multiple vendors from a commercial and operational standpoint. While there is a need to standardise cloud on-boarding, controls and the request process in a multi-cloud scenario, each provider will have a different stack and tools – having the right strategy in place will help ensure reduced friction in getting the best from this complex landscape.

In summary, 2019 will be a transformative year for enterprise cloud. Certainly the technology landscape is moving ever faster, making it a confusing time for buyers preparing to make the leap to boost agility and ultimately revenues. But the good news is that, with the appropriate levels of planning and a trusted team of transformation experts to guide you, the outlook is definitely cloudy.


Equinix and Alibaba Cloud focus on Asia Pacific in latest data centre launches

The race for cloud supremacy in the emerging Asia Pacific regions continues apace, with new data centre launches planned for Singapore and Indonesia by Equinix and Alibaba Cloud respectively.

Equinix announced that $85 million had been invested in a seven-storey site called SG4, which is expected to open its doors in the fourth quarter of 2019 with 1,400 cabinets available in the first instance. The site’s name reflects the fact that this will be Equinix’s fourth data centre in the city-state, with SG4 being built on the east side of the island, marking it out from the other sites.

The company trumpeted that the move will “provide interconnection and premium data centre services to help businesses with their IT transformation and cloud adoption initiatives, while also supporting the digital infrastructure of Singapore.”

Meanwhile, Alibaba announced the launch of a second data centre facility in Indonesia, bolstering its strength in the country. The move, which Alibaba claims was driven by ‘strong’ customer demand, aims to give customers greater disaster recovery and critical switchover capabilities. The company said the Indonesian market, with its “better connectivity and a fast-growing digital community”, presented “enormous opportunities to both local and global enterprises.”

Assessments of the market across Asia Pacific have varied, largely because the region’s standards themselves vary widely. Singapore sits at the top end of the spectrum: the nation was ranked last year as the number one cloud-ready Asia Pacific nation by the Asia Cloud Computing Association (ACCA). At the other end sits Indonesia, ranked 11th out of 14 nations, with international connectivity and sustainability garnering particularly poor marks.

In July, IDC warned in a research note that the vast majority of Asia Pacific organisations were at an early stage of their cloud journeys, with either ‘ad hoc’ or ‘opportunistic’ initiatives the order of the day. While Japan’s maturity is such that it is often excluded from this IDC analysis, the company argued that businesses needed more consistent, standardised, and available cloud resources.

The potential of Singapore, however, cannot be ignored. According to Nutanix’s Enterprise Cloud Index – as reported by Computer Weekly – companies in the city-state plan to significantly reduce traditional data centre usage in the coming two years.


The Morlocks are here: Why computing’s new paradigm is the time machine

Opinion: We had the desktop. Then came the cloud. And next, we’ll have the time machine.

The Time Machine is one of the classic 1960s-era sci-fi films. Based on the H.G. Wells novel, it follows a Victorian-era inventor (Rod Taylor) who gets propelled into the year 802,701 A.D. by a barber chair with a roulette wheel grafted onto it. In the distant future, he meets the Eloi, a future breed of humans who look like they just popped over from the country club. But just beneath the surface of the Earth dwell the Morlocks: hairy, unkempt albinos who are nonetheless active, ambitious and clever enough to turn the Eloi into a free-range food source. He escapes, explains everything to Mr. Ed’s best friend (Alan Young) and returns for his love interest, Weena.

Gorgeous and airy above. Dirty, but crafty and industrious, below. The same plot line propels Fritz Lang’s Metropolis.

What does that mean for computing? Computing architectures shift, and often relatively quickly, because of the ongoing tension between data and technology. Data grows exponentially: the world’s supply doubles every two years, with the desire to consume it accelerating at around the same pace. Bandwidth, meanwhile, grows in a linear fashion; it doubles whenever Comcast feels like rolling out the backhoes. Computing architectures are thus in a never-ending race to close the gap between what we humans want to accomplish and what the infrastructure can deliver. The goal is not to win, but simply to mask latency.
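To make that tension concrete, here is a toy sketch in Python; the starting values and growth increments are illustrative assumptions rather than measured figures, and only the diverging shapes of the two curves matter:

```python
# Toy model of the exponential-data-versus-linear-bandwidth tension.
# All starting values are arbitrary relative units.

data = 1.0       # relative units of stored data in the world
bandwidth = 1.0  # relative units of transfer capacity

for year in range(0, 13, 2):
    gap = data / bandwidth
    print(f"year {year:2d}: data={data:6.1f}  bandwidth={bandwidth:4.1f}  gap={gap:5.1f}x")
    data *= 2.0       # doubles every two-year step
    bandwidth += 1.0  # grows by a fixed increment per step
```

By the end of the run the gap approaches an order of magnitude, which is exactly the latency each new architecture scrambles to mask.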

From 1948 to the 70s, centralised mainframes ruled because they were quicker than adding machines. In the 80s, desktops ruled because they eliminated computing queues and allowed people to get moderately complex jobs done on their own. Then came the browser and data centres the size of the Pentagon: the applications you wanted to run, and the data you needed to access, far exceeded the capabilities of a laptop.

And then came the Fail Whale. Remember how Twitter used to crash regularly? That was the first of a growing number of signs that trying to manage everything from even the most elaborate and well-managed clouds and data centres wasn’t going to work. Edge data centre providers like vXchnge suddenly emerged to take the strain of serving up viral videos, becoming the first wedge of a retreat from a High Castle future.

The Internet of Things and edge computing architectures will only exacerbate the trend. Take predictive maintenance, the gateway drug of IIoT. It will save billions a year in reduced downtime and repair costs. But how do you design a data system that accommodates the volume, variety and velocity of the information while meeting the very urgent, short-term needs of its users?

A wind turbine typically has close to 650 parameters (hydraulic fluid levels, fluid temperature and so on). Updates every ten minutes mean roughly 93,600 readings a day, or around 34 million a year. Multiply that by the 44 turbines found in a mid-sized field that a wind developer will want to track, or the hundreds in an entire portfolio, or, if you’re a grid operator, the tens of thousands in your region. Then cross-check all of that against current pricing, demand projections, projected repair costs and other parameters.
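The arithmetic is easy to check with a few lines of Python; the fleet sizes beyond the article’s 44-turbine field are illustrative assumptions:

```python
# Back-of-the-envelope turbine telemetry volumes, using the article's
# figures: 650 parameters per turbine, one reading every ten minutes.

PARAMETERS = 650
READINGS_PER_DAY = (24 * 60) // 10  # 144 samples per parameter per day

def yearly_readings(turbines: int) -> int:
    """Total raw readings produced by a fleet in one year."""
    return PARAMETERS * READINGS_PER_DAY * 365 * turbines

for label, fleet in [("one turbine", 1), ("mid-sized field", 44), ("grid region", 10_000)]:
    print(f"{label:>15}: {yearly_readings(fleet):>16,} readings/year")
```

One turbine works out to about 34.2 million readings a year; a 44-turbine field is already north of 1.5 billion.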

Sending all of this data to the cloud would be prohibitively expensive. Worse, it could delay getting it into the hands of the people – repair technicians, field managers – who will be its primary users. The answer here is a compute-in-the-cloud, data-down-below architecture: send what’s necessary up to the cloud to build a model, but keep the system of record down below.
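A minimal sketch of what that split might look like at a turbine gateway; raw readings stay in the local system of record, and only compact summaries travel upstream (upload_to_cloud() is a hypothetical stand-in for whatever transport is actually used):

```python
# "Compute in the cloud, data down below": persist every raw reading
# locally, ship only condensed summaries upstream for model building.

from statistics import mean

local_store = []  # the system of record; never leaves the site

def ingest(reading: dict) -> None:
    """Record a raw sensor reading at the edge."""
    local_store.append(reading)

def summarise(parameter: str) -> dict:
    """Condense all local readings of one parameter into a single record."""
    values = [r[parameter] for r in local_store if parameter in r]
    return {"parameter": parameter, "count": len(values),
            "min": min(values), "max": max(values), "mean": mean(values)}

ingest({"hydraulic_fluid_temp": 61.2})
ingest({"hydraulic_fluid_temp": 63.8})
print(summarise("hydraulic_fluid_temp"))
# upload_to_cloud(summarise("hydraulic_fluid_temp"))  # hypothetical transport
```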

Back to the Morlocks. We’ve got a situation where we will have absolutely asinine amounts of data. Huge portions of it may only be relevant for specific tasks or consumed by other, nearby computers. You won’t want to throw it out – that would reduce the fidelity and accuracy of any findings. But you also don’t want to shuttle it between computers too much or replicate it too often – that would lead to a Chapter 11 reorganisation.

The solution: don’t move it. Keep it where it was generated. Conduct as much work as possible locally and reserve the cloud for your most elegant applications, where scalability can really help. Compute in the clouds, keep data down below. It’s not centralised, nor is it distributed. And, like the food cycle of 802,701, it’s symbiotic.

Some of this earthbound reality is reflected in recent analyst reports. IDC estimates that 40% of IoT data will be captured, processed and stored where it was generated. Gartner estimates the amount of data outside the cloud or enterprise data centres will grow from 10% today to 55% by 2022.

Are there better metaphors? A work colleague refers to these new models as examples of dispersed computing. It’s catchy. Others have said we should expect to see archipelagos of computing.

Also, the Eloi weren’t smart. They aren’t the data scientists of the future; more like their attractive, college-dropout children who plan to inherit. Maybe. But I can’t find the VCR, so my access to old B movies is compromised.


Microsoft Azure recovers from outage


Clare Hopping

11 Jan, 2019

Microsoft Azure’s UK South storage region suffered an outage yesterday, just a day after the company debuted its Azure Data Box Disk.

Just after lunchtime, customers started reporting their services were down. Some said their Azure accounts were unavailable, while others said they could only see a spinning wheel when trying to access the cloud service.

The problems began with Azure Storage but spread to other services, including App Service and Virtual Machines, with the company’s status page showing a blanket outage across all services after the issue was first reported. The Azure UK West storage region had not been affected at the time of writing.

“Starting at 13:19 UTC on 10 Jan 2019, a subset of customers leveraging Storage in UK South may experience service availability issues. In addition, resources with dependencies on Storage, may also experience downstream impact in the form of availability issues. Engineers have been engaged and are actively investigating. The next update will be provided in 60 minutes, or as events warrant,” the company’s service status page reported yesterday. An update later confirmed the issue continued until “approximately 05:30 UTC on 11 Jan 2019.”

The Azure Support team confirmed on Twitter that the issue had been resolved, saying: “Mitigated: Engineers have confirmed that the Storage availability issue in UK South is resolved. Any customers experiencing residual impact will receive communications to their portal. A full Root Cause Analysis will be provided in approximately 72 hours.”

However, some customers were unhappy that, while services were offline, the company had failed to communicate much since its original message as engineers scrambled to fix the issues.

The support team then followed up with another response two hours later, saying: “Hi there, as [we] continue working to resolve this issue, we are wondering if you have seen any signs of recovery yet?”

In terms of what caused the outage, Microsoft said: “Engineers determined that a number of factors, initially related to a software error, caused several nodes on a single storage scale unit to become temporarily unreachable. This, along with the increase in load on the scale unit caused by the initial issue, resulted in impact to customers with Storage resources located on this scale unit.”

CloudEndure confirms acquisition by Amazon Web Services

It’s official: Amazon Web Services (AWS) has acquired Israeli cloud disaster recovery and backup specialist CloudEndure.

The news had been rumoured over the past few days, but a short announcement from CloudEndure this morning confirmed it. The company said little more than that the acquisition “expands [its] ability to deliver innovative and flexible migration, disaster recovery, and backup solutions.”

CloudEndure offers disaster recovery, continuous backup and migration tools across AWS, Google Cloud Platform, Microsoft Azure and VMware. Following the acquisition, it is unclear how these paths will play out, although it is worth noting that the CloudEndure website has been redesigned to reflect the news, with the ‘contact us’ form leading directly to a landing page for AWS’ Migration Acceleration Program.

The move can be seen as yet another step on the path to the next evolution of cloud. As AWS, Microsoft and Google have long since emerged victorious in the infrastructure space, the current battleground focuses on cloud management and migration.

Many of these companies are now being snapped up by the behemoths – CloudHealth Technologies bought by VMware, Cloudyn acquired by Microsoft – and, as Michael Liebow, global managing director at Accenture, notes, there are many other niche solutions out there, all of them tempting acquisition targets.

“The fact is, companies that choose to build their own cloud management capabilities face a serious dilemma,” Liebow wrote. “A company that bets big on a capability, assuming it will be predictable or stable for some period of time, are likely wrong.

“The focus for most organisations should be on the level of innovation and new services coming from the cloud providers.”

CloudEndure, which was founded in 2012, had raised $18.2 million across three funding rounds before the acquisition. Its series B funding comprised two rounds, in December 2015 and March 2016, raising $6m and $7m respectively. The company’s primary backer was Magma Venture Partners, while IT consultancy Infosys co-led its most recent round. Financial terms of the acquisition were not disclosed, although Israeli media posited the figure was around the $200m-$250m mark.

CloudTech has reached out for comment and will update this story in due course.


AWS launches DocumentDB in a blow to open source


Keumars Afifi-Sabet

10 Jan, 2019

Amazon Web Services (AWS) has launched a managed document database service fully compatible with the widely used open source software MongoDB.

Amazon DocumentDB, touted as a fast and scalable document database designed to be compatible with existing MongoDB apps and tools, is built from the ground up, while remaining compatible with the aforementioned open source company’s API.

The move is seen as a kick in the teeth for open source after MongoDB recently released a set of public licensing policies for third-party commercial use. These aimed to put a stop to large vendors exploiting the firm’s freely available technology.

AWS’ managed database promises high performance and newfound scalability, AWS chief evangelist Jeff Barr announced in a blog post, with storage capacity climbing from a base of 10GB up to 64TB in 10GB increments.

“To meet developers’ needs, we looked at multiple different approaches to supporting MongoDB workloads,” said AWS vice president for non-relational databases Shawn Bice. “We concluded that the best way to improve the customer experience was to build a new purpose-built document database from the ground up, while supporting the same MongoDB APIs that our customers currently use and like.

“This effort took more than two years of development, and we’re excited to make this available to our customers today.”

AWS says its latest product offers users the capacity to build “performant, highly available applications that can quickly scale to multiple terabytes and hundreds of thousands of reads and writes per second”.

The firm added that customers have found using MongoDB inconvenient due to the complexities of setting up and managing clusters at scale.

DocumentDB uses a purpose-built SSD-based storage layer, with six-way replication across three availability zones. The storage layer is distributed and self-healing, giving it the qualities needed to run production-scale workloads, Barr added.

AWS’ newly-announced service will fully support MongoDB workloads on version 3.6, with customers also able to migrate their MongoDB datasets to DocumentDB, after which they’ll pay a fee for the capacity they use.

Amazon DocumentDB essentially implements the Apache 2.0 open source MongoDB 3.6 application programming interface (API) by emulating the responses that a MongoDB client would expect from a MongoDB server.
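In practice, that means an existing MongoDB client should connect largely unchanged. Below is a minimal sketch using the pymongo driver; the cluster endpoint, user and password are placeholders, and the connection options reflect DocumentDB’s documented requirements (TLS with Amazon’s CA bundle, and retryable writes disabled):

```python
# Connecting to Amazon DocumentDB with a standard MongoDB driver.
# The endpoint and credentials below are illustrative placeholders.

from pymongo import MongoClient

client = MongoClient(
    "mongodb://myuser:mypassword@mycluster.cluster-example.us-east-1.docdb.amazonaws.com:27017",
    tls=True,                                # TLS is enabled by default on DocumentDB
    tlsCAFile="rds-combined-ca-bundle.pem",  # Amazon-provided certificate bundle
    replicaSet="rs0",
    retryWrites=False,                       # retryable writes are not supported
)

products = client["inventory"]["products"]
products.insert_one({"sku": "ABC-123", "qty": 42})
print(products.find_one({"sku": "ABC-123"}))
```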

DocumentDB’s six-way storage replication also means workloads can fail over from one system to another within 30 seconds of a fault being detected. Meanwhile, customers will have the option to encrypt their active data, snapshots and replicas, with authentication enabled by default.

Version 3.6 of MongoDB is a little under a year and a half old, having been released in November 2017; the latest release, MongoDB 4.0.5 from December, added several new features and faster performance.

The two companies previously clashed in April 2017, when AWS extended its Database Migration Service (DMS) to cover the migration of MongoDB NoSQL databases. At the time, DynamoDB only worked within AWS, whereas MongoDB’s own service retained compatibility with a plethora of cloud providers.

Darktrace Keynote On-Demand Presentation at @CloudEXPO NY | @Darktrace #Cloud #CIO #DataCenter #AI #ArtificialIntelligence

In an age of borderless networks, security for the cloud and security for the corporate network can no longer be separated. Security teams are now presented with the challenge of monitoring and controlling access to these cloud environments, as they represent yet another frontier for cyber-attacks. Complete visibility has never been more important – or more difficult. Powered by AI, Darktrace’s Enterprise Immune System technology is the only solution to offer real-time visibility and insight into all parts of a network, regardless of its configuration. By learning a ‘pattern of life’ for all networks, devices, and users, Darktrace can detect threats as they arise and autonomously respond in real time – all without impacting server performance.


Emil Sayegh @CloudEXPO Presentation | @Hostway @ESayegh #Cloud #Infrastructure #CIO #Serverless #SDN #DataCenter

Public clouds dominate IT conversations, but the next phase of cloud evolution is “multi” hybrid cloud environments. The winners in the cloud services industry will be those organizations that understand how to leverage these technologies as complete service solutions for specific customer verticals. In turn, both business and IT actors throughout the enterprise will need to increase their engagement with multi-cloud deployments today, while planning a technology strategy that will constitute a significant part of their IT budgets in the very near future. As IoT solutions grow rapidly and security challenges grow exponentially, the cloud world is, without a doubt, about to change for the better. Again.
