Five Ways to Invest in Privacy | @CloudExpo #Cloud

Public perception of privacy and security in the post-Snowden era has changed, leading to end users caring vastly more about the topic. Last year, there were more breaches than ever before; ad-tracking technology has grown and will keep growing, collecting more and more data; and awareness of government access to personal data has increased.
Although the long-term consequences of data collection at this scale are still difficult to fully understand, concerns are rising from both the user's and the collector's point of view. End users, whether they are employees or customers, are demanding greater respect for their privacy and asking more questions about how and why their personal data is handled.

read more

.@Racemi Provides Migration Packages to Upgrade Windows Server 2003 to Cloud | @CloudExpo #Cloud

To assist customers running legacy Windows Server 2003, which is no longer supported by Microsoft, Racemi has introduced fixed-price packages for upgrading and migrating Windows Server 2003 servers to either Windows Server 2008 R2 or Windows Server 2012 R2, with a choice of Amazon Web Services (AWS) or SoftLayer cloud.
“We’re extending a lifeline by upgrading the legacy servers to more modern Windows Server platforms while taking advantage of cloud computing,” said James Strayer, vice president of product management, Racemi.
The initial step is a $199 Pre-Upgrade Assessment Services package that identifies Windows Server 2003 workloads, determines whether they are suitable candidates for upgrade and migration to the cloud, and considers how to minimize service disruption.

read more

Quantum Computing: From Theory to Reality By @TheEbizWizard | @CloudExpo #Cloud

The word quantum often portends New Age mumbo-jumbo, in spite of the fact that quantum mechanics underlies many of today’s most important technologies, including lasers and the semiconductors found in every computer chip.

Nevertheless, today quantum computing is becoming a reality. And while it may look to the layperson like mere mumbo-jumbo, in reality the technology has largely moved out of the theoretical stage, as recent news indicates.
In fact, two important announcements over the last few weeks underscore the progress quantum computing is making.

read more

Collaborating in a Shared Service Management Environment By @NancyVElsacker | @CloudExpo #Cloud

The processes for IT, facilities and human resources (HR) are broadly similar and overlap in places, such as commencement and exit procedures, so they can easily be brought together in a single management tool. However, even when this is done, and even when supporting departments keep their own tools and processes, it is not always clear to end users where they should turn for support. In practice, for instance, the management of mobile phones can be assigned to any of these departments, or to a combination of them. Collaboration between IT, facilities and HR, also called shared service management, cuts costs and improves the quality of service for end users.

read more

Hyper Converged Infrastructure, a Future Death Trap By @Felix_Xavier | @CloudExpo #Cloud

At the outset, hyper-convergence looks to be an attractive option that seemingly provides a lot of flexibility. In reality, it comes with many limitations and curtails the flexibility to grow hardware resources such as servers and storage independently of each other. In addition, performance nightmares are bound to hit once the system gets loaded.
In the late 1990s, storage and networking were separated from compute for a reason. Both need specialized processing, and it doesn't make sense for general-purpose servers to do that job; it is better handled by dedicated, specialized devices. The critical element in the entire data center infrastructure is data, and it may be better to keep that data on special devices with the required level of redundancy than to spread it across the entire data center. Hyper-convergence emerged for the noble cause of easing deployment in very small-scale branch office scenarios, since it is always complex to set up and operate a traditional SAN. The real problems start when we attempt to replicate this layout in a large-scale environment with transactional workloads. Three predominant issues can hit hyper-converged deployments hard, and together they can spell a death trap. Sophisticated IT houses know these problems and stay away from hyper-convergence, but others can fall prey to the hype cycle.

read more

Announcing @MicronTech to Exhibit at @CloudExpo Silicon Valley | #Cloud

SYS-CON Events announced today that Micron Technology, Inc., a global leader in advanced semiconductor systems, will exhibit at the 17th International Cloud Expo®, which will take place on November 3–5, 2015, at the Santa Clara Convention Center in Santa Clara, CA.
Micron’s broad portfolio of high-performance memory technologies – including DRAM, NAND and NOR Flash – is the basis for solid state drives, modules, multichip packages and other system solutions. Backed by more than 35 years of technology leadership, Micron’s memory solutions enable the world’s most innovative computing, consumer, enterprise storage, networking, mobile, embedded and automotive applications. Micron’s common stock is traded on the NASDAQ under the MU symbol.

read more

[session] Cloud Cost Tracking & Optimization By @ErnestMueller | @CloudExpo #Cloud

In his session at 17th Cloud Expo, Ernest Mueller, Product Manager at Idera, will explain the best practices and lessons learned for tracking and optimizing costs while delivering a cloud-hosted service. He will describe a DevOps approach where the applications and systems work together to track usage, model costs in a granular fashion, and make smart decisions at runtime to minimize costs.
The trickier parts covered include triggering off the right metrics, balancing resilience and redundancy against cost, and weighing AWS purchase options such as reserved instances and spot instances against your application's need for change.
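
As a rough sketch of the kind of runtime decision described above, the Python below tracks usage per component and picks the cheaper AWS purchase option a workload can actually tolerate. The prices, thresholds and names are illustrative assumptions, not material from the session.

# Illustrative sketch only: prices and helper names are assumptions,
# not taken from the session or from current AWS price lists.
from dataclasses import dataclass

@dataclass
class WorkloadUsage:
    service: str                  # component being tracked, e.g. "api-tier"
    vcpu_hours: float             # granular usage pulled from metrics
    interruption_tolerant: bool   # can the workload survive spot reclamation?

ON_DEMAND_PRICE = 0.05   # assumed $/vCPU-hour
SPOT_PRICE = 0.015       # assumed $/vCPU-hour

def choose_purchase_option(usage: WorkloadUsage) -> str:
    """Pick the cheaper purchase option the workload can actually tolerate."""
    if usage.interruption_tolerant and SPOT_PRICE < ON_DEMAND_PRICE:
        return "spot"
    return "on-demand"

def estimated_cost(usage: WorkloadUsage) -> float:
    price = SPOT_PRICE if choose_purchase_option(usage) == "spot" else ON_DEMAND_PRICE
    return usage.vcpu_hours * price

if __name__ == "__main__":
    for w in (WorkloadUsage("nightly-batch", 120.0, True),
              WorkloadUsage("api-tier", 300.0, False)):
        print(w.service, choose_purchase_option(w), f"${estimated_cost(w):.2f}")

In a real deployment the usage figures would come from the metrics pipeline the session describes, and the decision would run continuously rather than as a one-off script.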

read more

The Rising Growth of Hybrid Cloud By @Stratustician | @CloudExpo #Cloud

Now that we’ve had a few years of cloud adoption under our belts, I thought it was a good time to take a look at how some of the cloud models are performing. Public cloud has its own great case study in Amazon AWS, and private clouds have strong supporters among forward-thinking IT teams. But there is another model winning over IT teams, the hybrid cloud, and with good reason.
With the rise of cloud models, we’ve heard a lot about the benefits of public and private clouds. Public clouds gave organizations the ability to leverage low-cost services, such as Amazon AWS, to transition to cloud models. Private clouds were built in-house to take advantage of the same types of technologies that make public clouds so attractive, but sadly the economies of scale often don’t work for small organizations, because the upfront costs of purchasing hardware and licenses can exceed the cost of simply leveraging cloud services from a third-party provider.

read more

Google’s new autoscaling aims to offer instant gratification

Google is to give users more detailed and tightly controlled management of their virtual machines through a new autoscaling feature.

Announced on Google’s own blog, the Google Compute Engine Autoscaler aims to help managers exert tighter control over all the billable components of their virtual machine infrastructure, such as processing power, memory and storage. The rationale is to give customers a firmer grip on the costs of all the ‘instances’ (virtual machines) running on Google’s infrastructure and to ramp up resources more effectively when demand for computing power soars.

Google Compute Engine now allows users to specify the machine properties of their instances, such as the number of CPUs and the amount of RAM, for virtual machines running Linux or Windows Server. Cloud computing systems that face volatile workload variations will no longer be subject to escalating costs and performance ceilings, as the platform brings greater scalability, Google promised.

“Our customers have a wide range of compute needs, from temporary batch processing to high-scale web workloads. Google Cloud Platform provides a resilient compute platform for workloads of all sizes enabling our customers with both scale out and scale up capabilities,” said a joint statement from Google Compute Engine Product Managers Jerzy Foryciarz and Scott Van Woudenberg.

Spiky traffic, caused by sudden popularity, flash sales or unexpected mood swings among customers, can hit infrastructure with millions of requests per second and overwhelm the managers responsible for it. Autoscaler makes this complex scaling process simpler, according to Google’s engineers.

Autoscaler will dynamically adjust the number of instances in response to load conditions and remove virtual machines from the cloud portfolio when they are a needless expense. Autoscaler will rise from nought to millions of requests per second in minutes without the need to pre-warm, Google said.
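
Google has not published Autoscaler’s internals, but utilization-targeting autoscalers generally follow the same arithmetic: grow or shrink the instance group in proportion to how far observed load sits from the target. The Python sketch below illustrates that logic with assumed parameter names and limits; it is not Google’s implementation.

# Illustrative autoscaling arithmetic only; the target, limits and names
# are assumptions rather than Google's actual Autoscaler parameters.
import math

def desired_instances(current_instances: int,
                      observed_cpu_utilization: float,
                      target_cpu_utilization: float = 0.6,
                      min_instances: int = 1,
                      max_instances: int = 100) -> int:
    """Scale the group so average utilization moves back toward the target."""
    if current_instances == 0:
        return min_instances
    # Proportional rule: total load is constant in the short term, so the
    # instance count scales with the ratio of observed to target utilization.
    proposed = math.ceil(current_instances * observed_cpu_utilization
                         / target_cpu_utilization)
    return max(min_instances, min(max_instances, proposed))

print(desired_instances(10, 0.90))   # 15: scale out under heavy load
print(desired_instances(10, 0.12))   # 2: scale in when traffic drops off

Applied every few minutes, a rule like this is what lets a group ride out spiky traffic and then shed instances once they become a needless expense.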

In a related announcement, Google is to make 32-core virtual machines (VMs) available. This offering is aimed at customers with industrial-scale computing loads and storage-intensive projects, such as graphics rendering. Three variations of 32-core VMs are now on offer. The Standard offering has 32 virtual CPUs and 120 GB of memory. The High Memory option provides 32 virtual CPUs and 208 GB of memory, while the High-CPU offering provides 32 virtual CPUs and 28.8 GB of memory.

“During our beta trials, 32-core VMs have proven very popular with customers running many different workloads, including visual effects rendering, video transcoding, large MySQL and Postgres instances,” said the blog.

Cloud Hosting: Look Beyond Cost Savings By @Kevin_Jackson | @CloudExpo #Cloud

Is your company struggling with the idea of using “cloud hosting” in order to save money?
Truth be known, using cost savings as the primary reason for moving to cloud will almost guarantee failure. Some reasons that typically lead to cloud computing costing more include:
Building and migrating to a private cloud, which will almost always cost more than staying in a traditional data center;
Migrating legacy applications that weren’t designed to operate in a virtualized environment, or that are tied to very specific environments, run on older operating systems, or require out-of-date drivers;

read more