Radical business transformation is an essential part of business growth, but that transformation depends more on process than on technology – and on creating a culture of change within the business that has buy-in at every level. Technology vendors often tout the transformative potential of their products, but when those initiatives fail to deliver, a company must take a deeper look at the reasons: the problem may lie not with the vendor or a flaw in the technology but within the corporate culture.
Most change management challenges involve new technological tools, most of them known by familiar IT acronyms – TMS, TBM, CRM, ERP – and all of them need to perform in line with the reasonable expectations of the corporate decision-makers who chose them for their data advantages and automated process efficiencies. That investment is often sizeable, but unless a similar commitment is made to orienting the workforce towards the goals the systems are designed to achieve, the finance team will be left wondering why the ROI numbers are not appearing while everyone else feels the frustration.
Monthly archive: January 2016
Cloud Provider Roadmap | @CloudExpo #Cloud
One of the major asset development programs for the CBPN is the Cloud Provider Roadmap.
Where our maturity model is intended for end-users to guide their adoption of Enterprise Cloud Computing, the Roadmap is intended for the service providers who deliver the capabilities described in the model.
The goal is a product roadmap for Cloud providers that is based on common Cloud definitions (DRaaS, IDaaS, etc.), used as a framework for an industry-wide product management activity.
What Has NIST Done for Me Lately? By @Kevin_Jackson | @CloudExpo #Cloud
According to a study, 82 percent of federal IT professional respondents reported that they were using the NIST (National Institute of Standards and Technology) cybersecurity framework to improve their security stance. The survey also demonstrated that the document is being used as a stepping stone to a more secure government. When I first read this, my immediate reaction was a resounding, “So what!” These results tell me that US Federal Government agencies are using US Government guidance to do their US Government job. Isn’t that what you would expect? Making an impression on me would require a study across multiple industry verticals. If other industries were voluntarily using the NIST Framework, that would be saying something!
Toyota to build massive data centre and recruit partners to support smart car fleet
Car maker Toyota is to build a massive new IT infrastructure and data centre to support all the intelligence to be broadcast by its future range of smart cars. It is also looking for third-party partners to develop supporting services for its new fleet of connected vehicles.
The smart car maker unveiled its plans for a connected vehicle framework at the 2016 Consumer Electronics Show (CES) in Las Vegas.
A new data centre will be constructed and dedicated to collecting information from new Data Communication Modules (DCM), which are to be installed in all new vehicles. The Toyota Big Data Center (TBDC) – to be stationed in Toyota’s Smart Center – will analyse everything sent by the DCMs and ‘deploy services’ in response. As part of the connected car regime, Toyota cars could automatically summon the emergency services in the event of an accident, with calls triggered by the deployment of an airbag. The airbag-induced emergency notification system will come as a standard feature, according to Toyota.
The new data comms modules will appear as a feature in 2017 for Toyota models in the US market only, but the company will roll out the service to other markets later as part of a plan to build a global DCM architecture by 2019. A global rollout is impossible until devices are standardised across the globe, it said.
Toyota said it is to invite third party developers to create services that will use the comms modules. It has already partnered with UIEvolution, which is building apps to provide vehicle data to Toyota-authorised third-party service providers.
Elsewhere at CES, Nvidia unveiled artificial-intelligence technology that will let cars sense the environment and decide their best course. NVIDIA CEO Jen-Hsun Huang promised that the DRIVE PX 2 will have ten times the performance of the first model. The new version will use an automotive supercomputing platform with 8 teraflops of processing power that can process 24 trillion deep learning operations a second.
Volvo said that next year it will lease out 100 XC90 luxury sport utility vehicles that will use DRIVE PX 2 technology to drive autonomously around Volvo’s hometown of Gothenburg. “The rear-view mirror is history,” said Huang.
Pitfalls of Microsoft O365 Migrations Part 3: Mobile Devices & Help Desk
Here is the 3rd and final part of my video series around common pitfalls of Microsoft O365 migrations (you can watch part 1 here and part 2 here). In this final installment, I dive into the mobile side of Microsoft O365 as well as how your help desk factors in.
Pitfalls of Microsoft Office 365 Migrations Part 3
Or watch the video here.
Interested in learning more about Microsoft O365 Migrations? Download David’s recent webinar, “Microsoft Office 365: Expectations vs. Reality“
By David Barter, Practice Manager, Microsoft Technologies
Paradigm4 puts oncology in the cloud with Onco-SciDB
Boston-based cloud database specialist Paradigm4 has launched a new system designed to speed up the process of cancer research among biopharmaceutical companies.
The new Onco-SciDB (oncology scientific database) features a graphical user interface designed for exploring data from The Cancer Genome Atlas (TCGA) and other relevant public data sources.
The Onco application runs on top of Paradigm4’s SciDB database management system, which is devised for analysing multi-dimensional data in the cloud. The system was built by database pioneer Michael Stonebraker to use the cloud for massively parallel processing and an elastic supply of computing resources.
A cloud-based database system gives research departments cost control and the capacity to ramp up production when needed, according to Paradigm4 CEO Marilyn Matz. “The result is that research teams spend less time curating and accessing data and more time on interactive exploration,” she said.
Currently, the bioinformatics industry lacks the analytical tools and user interfaces needed to deal with the growing mass of molecular, image, functional, and clinical data, according to Matz. By simplifying the day-to-day challenge of working with multiple lines of evidence, Paradigm4 claims that SciDB supports clinical guidance for programmes such as precision anti-cancer chemotherapy. By making massively parallel processing possible in the cloud, it claims, it can provide enough affordable computing power for budget-constrained research institutes to trawl through petabytes of information and form hypotheses across the various sources of molecular, clinical and image data.
The SciDB database management system serves as the foundation for the 1000 Genomes Project and is used by biotech companies such as Novartis, Complete Genomics, Agios and Lincoln Labs. A custom version of Onco-SciDB has been beta tested at cancer research institute Foundation Medicine.
Industry veteran Stonebraker, the original creator of the Ingres and Postgres systems that formed the basis of IBM’s Informix and EMC’s Greenplum, won the Association for Computing Machinery’s Turing Award and $1 million from Google for his pioneering work in database design.
Why the cloud does not just “work”
Everyone knows that migrating workloads to the cloud is challenging. But many assume that after you get to the cloud, all you have to worry about is maintaining your applications.
After all, you have outsourced infrastructure management to AWS, right? No more racking and stacking servers, no more switches and hypervisors.
While AWS will maintain the physical infrastructure that supports your virtual environment, it is not initially set up to help you configure those virtual instances and get them ready to run your code. And it remains your responsibility to maintain that architecture so that it evolves when your applications change.
This is a potentially dangerous misconception – and one that we run into often.
If you only do the bare minimum in cloud management, or leave it to your (new) DevOps teams or your systems engineers (who are already busy helping with future projects and migrations) to figure out, it often results in chaotic cloud environments. Every instance has a “custom” configuration, a developer launched an environment and forgot to spin it down, changes are made but not documented, and wasted resources accumulate. Suddenly you are not sure which resources belong to which project, and when your site goes down, your team is left combing through dozens of resources in the AWS console.
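To make the problem concrete, here is a minimal sketch (not from the original article) of the kind of audit script a team might run with AWS’s boto3 SDK to surface instances nobody has claimed; the “Project” tag name and region are assumptions chosen for illustration.

```python
# Hypothetical audit sketch: list running EC2 instances that lack a "Project" tag,
# so orphaned or undocumented resources stand out. Assumes boto3 is installed and
# AWS credentials are configured locally; the tag name and region are illustrative.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.describe_instances(
    Filters=[{"Name": "instance-state-name", "Values": ["running"]}]
)

for reservation in response["Reservations"]:
    for instance in reservation["Instances"]:
        tags = {t["Key"]: t["Value"] for t in instance.get("Tags", [])}
        if "Project" not in tags:
            print("Untracked instance: {0} ({1}, launched {2})".format(
                instance["InstanceId"],
                instance["InstanceType"],
                instance["LaunchTime"],
            ))
```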
AWS is a world-class engine equipped with a robust set of tools, but it is not a car you can just drive off the lot. Even the best cloud systems require ongoing maintenance. And while moving to the cloud means you have “outsourced” racking and stacking servers, your IT team still needs to do things like configure networks, maintain permissions, lock down critical data, set up backups, create and maintain machine images, and dozens of other tasks AWS does not perform.
You may have heard of enterprises that now use a team of engineers to run their clouds. These enterprises realise that beyond the initial cloud migration, significant time and effort needs to be invested in automating configuration, auto scaling, and deployment. Automation allows these teams to spin up environments in seconds and repair failures without human effort. AWS provides the tools that make automation possible, but it is not configured on day one. That team of engineers may never touch the AWS console; they are maintaining automation scripts and modules, not the AWS resources themselves.
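As a rough illustration of what those automation scripts can look like in practice, the sketch below uses boto3 to launch a pre-written CloudFormation template from code rather than the console; the template file, stack name and parameter are assumptions for the example, not details from the article.

```python
# Hypothetical "environment as code" sketch: create a CloudFormation stack from a
# template file instead of clicking through the console. Stack name, template file
# and parameter are illustrative; assumes boto3 and AWS credentials are configured.
import boto3

cloudformation = boto3.client("cloudformation", region_name="us-east-1")

with open("web-environment.yaml") as f:  # hypothetical template file
    template_body = f.read()

cloudformation.create_stack(
    StackName="staging-web",
    TemplateBody=template_body,
    Parameters=[
        {"ParameterKey": "InstanceType", "ParameterValue": "t2.medium"},
    ],
)

# Block until the environment is fully provisioned; re-running the same script
# rebuilds the stack after a failure without manual work in the console.
waiter = cloudformation.get_waiter("stack_create_complete")
waiter.wait(StackName="staging-web")
```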
Over the last few years, the word “automation” has been used to describe many things. That is likely because every enterprise cloud environment will use different tools for different goals, and the end result will likely be partial automation – not every organisation needs to deploy 200 times a day or wants to become the next Netflix, after all. But the great thing about automation is that it is not an all-or-nothing proposition. Even a little effort yields immediate results.
The cloud does not work out of the box, nor does it maintain itself. But with the right team and the right skill set – in tools like Puppet, Jenkins, AWS CloudFormation – it can change your IT department forever.
The post The Cloud Does Not Just “Work” appeared first on Gathering Clouds.
Qingteng Funding
China-based enterprise security start-up Qingteng Cloud Security has recently closed its first round of funding, raising 60 million yuan from CBC Capital and Redpoint Ventures and setting a record among Chinese cloud security start-ups for the size of a first round. Prior to this round, Qingteng had received 6.5 million yuan in investment from ZhenFund, Cloud Angel Fund, and Fenghou Capital.
Qingteng’s founder, Zhang Fu, has commented on the severity of the current state of enterprise Internet security: security operations tend to happen only when problems arise, rather than through a system of consistent security maintenance. According to Zhang, the most efficient way to respond to external attacks with internal resources is to give security teams tools that handle emergency situations automatically, so that security operations staff can focus on regular security management instead.
The security platform was designed to adapt to a variety of infrastructures and situations. It can automatically process security issues, build a security model for the enterprise, analyze internal and external anomalies that could lead to security problems, and detect and block hacking activity. According to Zhang, the platform has already been adopted by enterprises in many business sectors, including healthcare and finance.
About Qingteng
Qingteng was founded in August of 2014 and is dedicated to offering an adaptive cloud security program that can protect data on various systems and enterprises.
The post Qingteng Funding appeared first on Cloud News Daily.
Encrypting Parallels Desktop Virtual Machines
Guest blog by Dhruba Jyoti Das, Parallels Support Team. Do you think encryption guarantees effective security for your data? While the word “guarantee” might be a bit strong, encryption is a very effective method of restricting your data from unauthorized access. What’s encryption? Encryption is the process of encoding information in such a way that […]
The post Encrypting Parallels Desktop Virtual Machines appeared first on Parallels Blog.
Cloud computing in 2015 – and what is on the horizon for 2016
As we wave goodbye to 2015 and say hello to 2016, it has again been an interesting, mostly successful year for cloud. According to research from Synergy, public cloud generates over $20 billion (£12.9bn) in quarterly revenues for IT firms. With the usual mix of mergers and acquisitions, security scares, and a few surprises, here are the highlights from 2015.
2015 review
February: Box files for IPO, enjoys initial uplift. The on-again, off-again story of enterprise cloud storage provider Box going public was finally resolved in February, with the company raising $175 million in its IPO and mostly confounding the critics. Tien Tzuo, CEO of Zuora and a long time advocate of Box’s business model, wrote in this publication that the storage firm’s opportunities were “limitless.”
Read more: Box raises $175m with IPO, but what is next?
July: Microsoft ends support for Windows Server 2003, companies try to cling on. July 14 2015 was a date etched firmly into the calendars of scores of businesses and IT execs worldwide. The end of support for WS2003 was supposed to mark a watershed in corporate IT – but naturally, businesses reluctant to change had other ideas. Research from the Cloud Industry Forum showed almost three quarters of firms with more than 200 employees were still running the server despite support having expired.
Read more: Windows Server 2003 end of life upgrade: Why many companies are leaving it “quite late”
August: Wuala announces it is to shut down its service. Wuala, the Switzerland-based cloud storage service owned by Seagate, announced in August that customers would have until November 15 to transfer their data before the service disappeared. The company offered up Tresorit as a potential new home. Tresorit chief exec Istvan Lam welcomed Wuala customers, though he added: “You can’t build your business on storage”.
Read more: Wuala cloud storage to shut down, offers Tresorit as potential new home
October: Dell coughs up $67bn for EMC. In the biggest tech deal in history, Dell parted with an eye-watering $67 billion (£43.7bn) to acquire EMC and create what was described by the companies as “the world’s largest privately-controlled, integrated technology company.” VMware would remain an independent, publicly traded company. Some analysts strongly supported the deal, but others were less convinced.
Read more: Dell agrees $67bn EMC deal: An “enterprise powerhouse” or “the walking dead”?
November: Microsoft turns off the tap on unlimited OneDrive. As it transpired, unlimited doesn’t really mean unlimited, as Microsoft said it was no longer offering limitless OneDrive storage. The reason: a minority of users who were just too greedy for their own good, backing up numerous PCs and storing entire movie collections on the service. Brian Taptich, CEO at developer-focused storage provider Bitcasa, told this publication that it was “definitely not a failure” on Microsoft’s part, but added that competing against Microsoft, Google, Amazon et al on their own terms was akin to a “suicide mission.”
Read more: Microsoft turns off unlimited OneDrive for Office 365, blames greedy users
2016 predictions
Unified communications will become the “true connective tissue” in organisations: That’s the view of Mike Nessler, executive vice president at InterCall, who argues: “We’ll see organisations identifying new ways to drive UC value and ROI by giving IT managers and end users metrics and data to improve their communications.” The key for UC is also around getting tools in the right hands, says Nessler. “2016 will be a year for reassessment as CIOs not only take a look at what UC software they bought, but how end users are putting it to use.”
Amazon to ride out more competition in storage: Michael Tso, CEO and co-founder of Cloudian, argues co-location and data centre providers could provide stiffer competition to Amazon in 2016 as a best bet for cloud storage. “To differentiate themselves, they will start to offer more high-end targeted solutions for a new part of the market, for example disaster recovery, continuity, and regulatory compliance services,” he explains, adding: “As a result of this Amazon growth, S3 will become the de facto storage interface for all new applications and will replace CIFS/NFS as the access protocol for files and content storage.”
2016 will be the year of cloud security and cloud ROI: This is the view of Yorgen Edholm, CEO and president of Accellion. “Businesses are finally recognising the ROI gains of using a public cloud for non-critical content storage don’t have to be discarded in order to retain the security [and] compliance benefits offered by on-premise and private solutions,” he said.