Monthly archive: November 2017
[video] @Cloudistics’ Public Cloud Benefits | @CloudExpo #API #Serverless #CloudNative
You know you need the cloud, but you’re hesitant to simply dump everything at Amazon since you know that not all workloads are suitable for cloud. You know that you want the kind of ease of use and scalability that you get with public cloud, but your applications are architected in a way that makes the public cloud a non-starter. You’re looking at private cloud solutions based on hyperconverged infrastructure, but you’re concerned with the limits inherent in those technologies.
What do you do?
[video] Data Mobility: Embracing Hybrid Cloud | @CloudExpo @IBMSystems #Mobile #DX
Leading companies, from the Global Fortune 500 to the smallest companies, are adopting hybrid cloud as the path to business advantage. Hybrid cloud depends on cloud services and on-premises infrastructure working in unison. Successful implementations require new levels of data mobility, enabled by an automated and seamless flow across on-premises and cloud resources. In his general session at 21st Cloud Expo, Greg Tevis, an IBM Storage Software Technical Strategist and Customer Solution Architect, explored how storage and software-defined solutions from IBM have evolved for the road ahead. Learn how you can bring new levels of speed, agility and efficiency to the applications and workloads you choose to deploy across a hybrid cloud model.
macOS High Sierra Critical Update with Parallels Mac Management for Microsoft SCCM
Update: Apple® has released a macOS® High Sierra security fix for the critical root vulnerability in macOS High Sierra 10.13.1 (macOS Sierra 10.12.6 and earlier are not affected). Make sure to install this update on all affected macOS computers, as described at support.apple.com/en-us/HT208315. As reported on CNET on November 28, 2017, a major bug has been uncovered that allows […]
The post macOS High Sierra Critical Update with Parallels Mac Management for Microsoft SCCM appeared first on Parallels Blog.
[video] Microservices: Choosing the Right Cloud, Services and Tools | @CloudExpo @IBMcloud #AI #Cloud #Microservices
We all know that end users experience the Internet primarily with mobile devices. From an app development perspective, we know that successfully responding to the needs of mobile customers depends on rapid DevOps – failing fast, in short, until the right solution evolves in your customers’ relationship to your business. Whether you’re decomposing an SOA monolith or developing a new application cloud natively, it’s not a question of whether to use microservices – not doing so will be a path to eventual business failure.
AWS takes a musical approach at re:Invent with machine learning, serverless and IoT key
More than 20 announcements flowed at AWS re:Invent today, including updates on machine learning, databases and the Internet of Things (IoT) – not to mention a couple of big-name customer wins.
The theme for Amazon Web Services CEO Andy Jassy was around builders. Unlike the previous year’s keynote, where it was about superpowers, this presentation had a slightly more mundane – and musical – approach.
Technology builders are like musicians, Jassy explained. Some guitarists may play acoustic a lot of the time, but require some parts be played electric. Having the appropriate tools for the job at hand, and taking the right approach to building your work, was key. And through the music – or rather words – of the Foo Fighters, Eric Clapton, Lauryn Hill and more, Jassy went through the gamut of AWS’ updates.
First up were the instances. AWS had announced bare metal offerings for EC2 earlier in the week, and augmented that with new H1 Storage Optimised instances – designed for data-intensive workloads or big data clusters – as well as updated M5 general purpose instances.
“Let’s talk about freedom,” said Jassy. Ah, one thinks: is the house band about to start up a certain George Michael number by any chance? Indeed they were. Jassy described freedom in this instance as “the ability not to be locked into abusive or ownerless relationships, or one size fits all characterisations or tools.” At this moment, veteran cloud-watchers could have already ticked off the Oracle-bashing square on their re:Invent bingo card, and AWS duly delivered (below).
This begat the launch around Aurora Multi-Master, which gives Aurora customers the ability for millisecond-quick failover on writes as well as reads, and a preview of Aurora Serverless, an on-demand auto-scaling serverless database. The latter’s announcement generated an unseemly outbreak of whooping from some sections of the audience. “What you get here is all the capabilities of Aurora…it doesn’t require you to provision any database instances, it automatically scales up when the database is busy, scales down when it’s not…pay only by the second when your database is being used,” said Jassy. “That is pretty different.”
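For readers curious what “no database instances to provision” looks like in practice, here is a minimal sketch of creating an Aurora Serverless cluster with boto3. The cluster name, credentials, region and capacity range are illustrative assumptions on our part, not values from the announcement.

```python
import boto3

# Sketch only: identifier, credentials, region and capacity range are placeholders.
rds = boto3.client("rds", region_name="us-east-1")

rds.create_db_cluster(
    DBClusterIdentifier="demo-aurora-serverless",   # hypothetical cluster name
    Engine="aurora",                                 # MySQL-compatible Aurora
    EngineMode="serverless",                         # no instances to provision
    MasterUsername="admin",
    MasterUserPassword="change-me-please",
    ScalingConfiguration={
        "MinCapacity": 2,              # Aurora capacity units (ACUs)
        "MaxCapacity": 16,             # scales up when the database is busy
        "AutoPause": True,             # pauses, and stops billing, when idle
        "SecondsUntilAutoPause": 300,
    },
)
```

The scaling configuration is where the per-second, on-demand behaviour Jassy described is expressed: the service grows and shrinks between the two capacity bounds and can pause entirely when the database is idle.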
Perhaps the most exciting announcements AWS made were around machine learning. Little wonder; the hype continues to rise – “of all the buzzwords we’ve heard in the 11-and-a-half years we’ve been doing AWS, machine learning might be the loudest”, Jassy noted – and AWS is even sleeping with the enemy through its collaboration with Microsoft in this area, Gluon.
What transpired was Amazon SageMaker, a new service aimed at helping developers build, train, and deploy machine learning models. “If you think about the history of AWS, we don’t build technology because we think the technology is cool,” said Jassy. “The only reason we build that technology is to solve problems for you. We want everyday developers and scientists to be able to use machine learning much more expansively.”
The most interesting aspect of SageMaker is its modular nature; users can build and train on it and deploy somewhere else, or vice versa. In a completely different ballpark is AWS DeepLens, a high definition camera with on-board compute optimised for deep learning. AWS said it expects users to get started running their first deep learning computer vision model in 10 minutes from unboxing the camera.
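To illustrate that modularity, here is a hedged sketch using the boto3 SageMaker client: run a managed training job, then either deploy the result to a SageMaker endpoint or simply take the model artefact it writes to S3 and host it elsewhere. The bucket, container image, IAM role and instance types below are placeholders we have made up for the example, not details from the announcement.

```python
import boto3

sm = boto3.client("sagemaker", region_name="us-east-1")

# 1) Train: a managed training job (image, role and S3 paths are placeholders).
sm.create_training_job(
    TrainingJobName="demo-xgboost-job",
    AlgorithmSpecification={
        "TrainingImage": "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
        "TrainingInputMode": "File",
    },
    RoleArn="arn:aws:iam::123456789012:role/DemoSageMakerRole",
    InputDataConfig=[{
        "ChannelName": "train",
        "DataSource": {"S3DataSource": {
            "S3DataType": "S3Prefix",
            "S3Uri": "s3://demo-bucket/train/",
            "S3DataDistributionType": "FullyReplicated",
        }},
    }],
    OutputDataConfig={"S3OutputPath": "s3://demo-bucket/models/"},
    ResourceConfig={"InstanceType": "ml.m4.xlarge",
                    "InstanceCount": 1,
                    "VolumeSizeInGB": 10},
    StoppingCondition={"MaxRuntimeInSeconds": 3600},
)

# 2) Deploy on SageMaker -- or skip these calls and serve the model.tar.gz
#    written under s3://demo-bucket/models/ on your own infrastructure.
sm.create_model(
    ModelName="demo-xgboost-model",
    PrimaryContainer={
        "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
        "ModelDataUrl": "s3://demo-bucket/models/demo-xgboost-job/output/model.tar.gz",
    },
    ExecutionRoleArn="arn:aws:iam::123456789012:role/DemoSageMakerRole",
)
sm.create_endpoint_config(
    EndpointConfigName="demo-xgboost-config",
    ProductionVariants=[{
        "VariantName": "primary",
        "ModelName": "demo-xgboost-model",
        "InstanceType": "ml.m4.xlarge",
        "InitialInstanceCount": 1,
    }],
)
sm.create_endpoint(EndpointName="demo-xgboost",
                   EndpointConfigName="demo-xgboost-config")
```

The point of the split is that steps 1 and 2 are independent: you can train elsewhere and only use the hosting calls, or train here and deploy somewhere else entirely.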
On top of this was Amazon Transcribe and Translate, automatic speech recognition and translation services. It will be interesting to see how the latter stacks up against Google Pixel Buds, the search giant’s recent effort in this area – and the wag who piped up on Twitter that Jassy’s keynote should have been the first test for Transcribe may have had a point – but it is another example of AWS’ machine learning initiatives.
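To give a sense of what the two new services expose, here is a hedged boto3 sketch: Translate works synchronously on short text, while Transcribe runs as an asynchronous job over an audio file stored in S3. The bucket, file and job names are invented for illustration.

```python
import boto3

# Amazon Translate: synchronous text translation.
translate = boto3.client("translate", region_name="us-east-1")
result = translate.translate_text(
    Text="Hello from re:Invent",
    SourceLanguageCode="en",
    TargetLanguageCode="es",
)
print(result["TranslatedText"])

# Amazon Transcribe: asynchronous speech-to-text over audio stored in S3.
transcribe = boto3.client("transcribe", region_name="us-east-1")
transcribe.start_transcription_job(
    TranscriptionJobName="keynote-demo",                     # hypothetical job name
    LanguageCode="en-US",
    MediaFormat="mp3",
    Media={"MediaFileUri": "s3://demo-bucket/keynote.mp3"},  # placeholder audio file
)
# Poll get_transcription_job() until the status is COMPLETED, then fetch the
# transcript from the TranscriptFileUri it returns.
```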
The IoT side saw a trio of products, in the shape of IoT Device Management, IoT Analytics, and IoT Device Defender. The latter is of most interest, offering continuous auditing, real-time detection and alerting, and more to help secure millions of devices.
On the customer side, Expedia announced it was going all-in on AWS. Mark Okerstrom, president and CEO of Expedia, told attendees that he expects 80% of the company’s critical apps will move to AWS within three years. Elsewhere, The Walt Disney Company has selected AWS as its preferred public cloud infrastructure provider.
Picture credits: Amazon Web Services
[video] A Visual Platform with Cloud4U | @CloudExpo #API #Cloud #DevOps
“Cloud4U builds software services that help people build DevOps platforms for cloud-based software and using our platform people can draw a picture of the system, network, software,” explained Kihyeon Kim, CEO and Head of R&D at Cloud4U, in this SYS-CON.tv interview at 21st Cloud Expo, held Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
Small businesses ‘confident’ about cloud storage security – but is that confidence misplaced?
The verdict has come in for small businesses securing their clouds: good in parts, but could be better.
That’s according to the latest security survey from B2B research firm Clutch. In its latest report, which polled 300 US small businesses with a maximum of 500 employees, 90% of respondents said their cloud was secure, representing a small increase from the previous year.
More than half of the organisations polled said they use encryption (cited by 60% of respondents), employee training (58%) or two-factor authentication (53%) to secure their cloud storage, with only 6% saying they use no measures at all. Yet Clutch argues these numbers could be better; ‘almost all small businesses should be using these measures to protect their cloud storage’, the company said.
More concerning, however, is the revelation that 62% of small businesses say they do not follow industry regulations when storing banking information in cloud storage. What’s more, 54% of small businesses admit the same for medical data. The regulations in question are PCI DSS for payments and HIPAA for healthcare.
Only 35% and 36% of respondents respectively said they followed the PCI DSS and HIPAA regulations. Almost three quarters (74%), however, said they followed the ISO accreditation. Istvan Lam, CEO of Tresorit, said it was “a good way to protect information inside the organisation… even if someone is not necessarily going to be audited for the standard, it’s a good practice to follow.”
The overall verdict from the report is that while small businesses are ‘confident’ in their cloud storage security, following industry regulations and implementing additional security measures should be seen as a must-have rather than a value-add.
“While cloud storage offers enormous benefits in cost savings, data portability and security, small businesses should ensure they implement proper security measures and follow necessary regulations to protect their data in the cloud,” the report concluded.
You can read the full report here.
Cloud Collaboration Reaches New Heights for Fujitsu and Citrix | @CloudExpo #VDI #Cloud #Virtualization
Long-term partners Fujitsu Limited and Citrix Systems Japan have announced a new virtual desktop service based in the cloud. Designed to take some of the pain out of digital transformation, the new offering makes it easier to create digital workspaces in a secure manner that’s scalable.
The Fujitsu Managed Infrastructure Service Virtual Desktop Service VCC (Virtual Client on Cloud) employs the Citrix suite of virtual desktop infrastructure products, including Citrix XenApp, Citrix XenDesktop, and Citrix ShareFile. The two companies have been working together in the VDI space for decades, but this new agreement enables Fujitsu to sell the new service.
6 Key Questions When Considering a DRaaS Solution
I recently sat down with our very own Tim Ferris: Solutions Architect, Yankees fan (don’t hold that against him) and DRaaS guru. We talked about some of the common questions customers ask when considering DRaaS and the common themes Tim sees when helping customers plan and implement DRaaS solutions. Check out our conversation below about the key questions to ask when considering a DRaaS solution.
1. Does your company really need a DRaaS solution?
There are a variety of reasons why a DRaaS solution isn’t always the best fit for an organization. Depending on the type of business, an offsite disaster recovery strategy might not be a great match if the business is site dependent. Cost is another factor: companies need to make the business determination of whether disaster recovery is strategic enough to invest in, or whether that money is better spent on a robust insurance policy.
2. How are DRaaS solutions priced?
Traditionally, a huge barrier to offsite DR adoption has been price; DRaaS, however, makes DR and the supporting infrastructure much more affordable and attractive to companies. DRaaS cloud billing and pricing is still a challenge, though, because pricing models vary widely across providers. This can be a huge point of contention and another reason to use a solution provider who can model out true cost comparisons and estimates across various cloud partners.
3. Is your company ready to embrace a modern DRaaS strategy?
For many traditional IT organizations, the move to DRaaS can be intimidating since you’re moving your DR environment off-site to a third party. You may also be concerned about losing data stewardship and need to understand the differences that living on a shared infrastructure can pose. In addition, some applications have physical dependencies and can’t be handled by virtual DRaaS, so evaluating your application portfolio is crucial. On the upside, eliminating most of your capex and turning it into a monthly recurring cost can be valuable to many companies.
4. How simple is it to implement a DRaaS solution?
There’s a lot of marketing hype around this idea that DRaaS solutions are very simple: “Buy our DRaaS solution and we’ll have you up in an hour!” While many providers can technically get the DRaaS framework up quickly, there are a lot of variables that are unique to each company. (See #5) Because DRaaS is not one-size-fits-all, many companies work with IT solution providers (like GreenPages) to help create and implement a DR migration plan and implementation strategy. Compounding the issue is that the DRaaS solution provider market is very crowded so it can be challenging to navigate the options—it’s important to choose based on your company’s specific requirements.
5. What sorts of barriers or common problems will I encounter?
You must make sure as an organization that you have created a business impact analysis and overarching disaster recovery requirements before someone can come in and implement the technical solution. Another prerequisite is understanding the interdependencies of all your applications, so that you aren’t just replicating VMs but are protecting the business solutions and applications critical to the company. While ongoing management isn’t a barrier to DRaaS, testing can be challenging no matter what DR solution you implement. (See #6.)
6. Can’t I just have a backup solution rather than a DRaaS solution?
Most companies do have a backup solution but not always a practical DR plan. Restoring from backup tape could take days to weeks; a true DRaaS system would provide recovery within minutes to hours. Backup is vitally important, but you may need to combine backup with DRaaS to restore your systems properly, as the two complement each other. Another important thing to keep in mind is that many companies do have a DR plan but have never tested it. Without testing, it’s not a plan, it’s just a theory. In addition, you will learn plenty of helpful and interesting information when you test your plan. Most important, you don’t want to learn that your DRaaS plan was faulty on the day you push the DR button in an actual emergency.
Thanks for checking out our blog post! If you have any more questions about implementing DRaaS or would like to speak to a technologist, please reach out to us or click below.
By Jake Cryan, Digital Marketing Specialist