The word quantum often evokes New Age mumbo-jumbo, despite the fact that quantum mechanics underlies many of today’s most important technologies, including lasers and the semiconductors found in every computer chip.
Nevertheless, quantum computing is today becoming a reality. And while it may look to the layperson like mere mumbo-jumbo, the technology has largely moved out of the theoretical stage, as recent news indicates.
In fact, two important announcements over the last few weeks underscore the progress quantum computing is making.
The processes for IT, facilities and human resources (HR) are broadly similar and overlap in places, such as onboarding and exit procedures, and they can easily be brought together in a single management tool. Yet even when they are consolidated, and especially when each supporting department keeps its own tools and processes, it is not always clear to end users where they should turn for support. In practice, for instance, the management of mobile phones can be assigned to any of these departments, or to a combination of them. Collaboration between IT, facilities and HR, also called shared service management, cuts costs and improves the quality of service for end users.
At first glance, hyperconvergence looks like an attractive option that promises a great deal of flexibility. In reality, it comes with significant limitations and curtails the ability to grow hardware resources such as servers and storage independently of each other. In addition, performance problems are bound to surface once the system comes under load.
In the late 1990s, storage and networking were separated from compute for a reason. Both require specialized processing, and it doesn’t make sense for general-purpose servers to do that work; it is better handled by groups of dedicated devices. The most critical element in the entire data center infrastructure is the data, and it may be better to keep that data in specialized devices with the required level of redundancy than to spread it across the whole data center. Hyperconvergence emerged for the noble cause of easing deployment in very small branch-office scenarios, since a traditional SAN is always complex to set up and operate. The real problems start when we attempt to replicate this layout in a large-scale environment with transactional workloads. Three predominant issues can hit hyperconverged deployments hard, and they can spell a death trap. Sophisticated IT shops know these problems and stay away from hyperconvergence, but others can fall prey to the hype cycle.
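To make the coupled-scaling argument concrete, here is a minimal sketch in Python, using purely hypothetical node specifications (real hardware configurations vary widely), that shows how a hyperconverged cluster needing only more storage still has to buy compute along with it, while a disaggregated design grows each resource independently.

    import math

    # Hypothetical building blocks; actual hardware configurations vary widely.
    HCI_NODE = {"cpu_cores": 32, "storage_tb": 20}       # compute and storage bundled
    COMPUTE_NODE = {"cpu_cores": 32, "storage_tb": 0}    # disaggregated compute
    STORAGE_SHELF = {"cpu_cores": 0, "storage_tb": 60}   # disaggregated storage

    def hci_nodes_needed(cores_required: int, storage_tb_required: int) -> int:
        """In a hyperconverged cluster both dimensions come from the same
        node type, so the larger requirement dictates the node count."""
        return max(math.ceil(cores_required / HCI_NODE["cpu_cores"]),
                   math.ceil(storage_tb_required / HCI_NODE["storage_tb"]))

    def disaggregated_units_needed(cores_required: int, storage_tb_required: int):
        """With separate compute and storage tiers, each resource scales on its own."""
        return (math.ceil(cores_required / COMPUTE_NODE["cpu_cores"]),
                math.ceil(storage_tb_required / STORAGE_SHELF["storage_tb"]))

    if __name__ == "__main__":
        # A storage-heavy workload: modest compute, lots of data.
        cores, storage = 64, 400
        print("HCI nodes:", hci_nodes_needed(cores, storage))          # 20 nodes
        print("Disaggregated (compute, shelves):",
              disaggregated_units_needed(cores, storage))              # (2, 7)

In this toy example the hyperconverged cluster ends up with twenty nodes’ worth of CPU just to satisfy a storage requirement that two compute nodes and seven storage shelves would cover in a disaggregated layout.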
SYS-CON Events announced today that Micron Technology, Inc., a global leader in advanced semiconductor systems, will exhibit at the 17th International Cloud Expo®, which will take place on November 3–5, 2015, at the Santa Clara Convention Center in Santa Clara, CA.
Micron’s broad portfolio of high-performance memory technologies – including DRAM, NAND and NOR Flash – is the basis for solid state drives, modules, multichip packages and other system solutions. Backed by more than 35 years of technology leadership, Micron’s memory solutions enable the world’s most innovative computing, consumer, enterprise storage, networking, mobile, embedded and automotive applications. Micron’s common stock is traded on the NASDAQ under the MU symbol.
In his session at 17th Cloud Expo, Ernest Mueller, Product Manager at Idera, will explain the best practices and lessons learned for tracking and optimizing costs while delivering a cloud-hosted service. He will describe a DevOps approach where the applications and systems work together to track usage, model costs in a granular fashion, and make smart decisions at runtime to minimize costs.
The trickier parts covered include triggering off the right metrics, balancing resilience and redundancy against cost, and weighing AWS purchase options such as Reserved Instances and Spot Instances against your application’s need for change.
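To illustrate the purchase-option trade-off, here is a minimal sketch in Python with hypothetical hourly prices and a deliberately simplified cost model (real AWS pricing varies by region, instance family and term) that compares the effective monthly cost of On-Demand, Reserved and Spot capacity at different utilization levels.

    # Hypothetical hourly prices for a single instance type; real AWS pricing
    # varies by region, instance family and commitment term.
    ON_DEMAND_RATE = 0.10   # $/hour, pay only for hours actually used
    RESERVED_RATE = 0.06    # $/hour effective, but billed for every hour of the term
    SPOT_RATE = 0.03        # $/hour average, capacity can be reclaimed at any time

    HOURS_PER_MONTH = 730

    def monthly_cost(utilization: float) -> dict:
        """Compare purchase options for a workload that runs `utilization`
        (0.0-1.0) of the hours in a month."""
        used_hours = HOURS_PER_MONTH * utilization
        return {
            "on_demand": ON_DEMAND_RATE * used_hours,
            # Reserved capacity is paid for whether or not it is used.
            "reserved": RESERVED_RATE * HOURS_PER_MONTH,
            "spot": SPOT_RATE * used_hours,
        }

    if __name__ == "__main__":
        for utilization in (0.2, 0.5, 0.9):
            costs = monthly_cost(utilization)
            cheapest = min(costs, key=costs.get)
            print(f"utilization {utilization:.0%}: "
                  + ", ".join(f"{k} ${v:.2f}" for k, v in costs.items())
                  + f" -> cheapest: {cheapest}")

In this toy model Spot always wins on raw price, which is exactly why the real decision hinges on the workload’s tolerance for interruption; Reserved capacity, meanwhile, only beats On-Demand once the instance runs more than about 60 percent of the time, the kind of break-even that granular cost tracking is meant to surface.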
Now that we’ve had a few years of cloud adoption under our belts, I thought it was a good time to look at how some of the cloud models are performing. Public cloud has its own great case study in Amazon AWS, and private clouds have strong supporters among forward-thinking IT teams. But there is another model that is winning over IT teams, the hybrid cloud, and with good reason.
With the rise of cloud models, we’ve heard a lot about the benefits of public and private clouds. Public clouds gave organizations the ability to leverage low-cost services, such as Amazon AWS, to help them transition to cloud models. Private clouds were built in-house to take advantage of the same technologies that make public clouds so attractive, but sadly the economies of scale often don’t work for small organizations, because the upfront costs of purchasing hardware and licenses can exceed the cost of simply leveraging cloud services from a third-party provider.
Is your company struggling with the idea of using “cloud hosting” in order to save money?
Truth be told, using cost savings as the primary reason for moving to the cloud will almost guarantee failure. Some reasons that cloud computing typically ends up costing more include:
Building and migrating to a private cloud, which will almost always cost more than staying in a traditional data center;
Migrating legacy applications that weren’t designed to operate in a virtualized environment, are tied to very specific environments, run on older operating systems, or require out-of-date drivers;
Transitioning from the freedom of summer to the structure of back-to-work and school can be tough for all of us. Yet September is a time of renewal, a time to refocus on our goals and remember that making big changes sometimes begins with small ones. The key is not to overwhelm ourselves, but to keep moving forward, often one small step at a time.
We know that the big work of migrating hundreds of thousands of machines running Windows Server 2003 — including 175 million websites or one fifth of the internet, according to recent numbers provided by Internet services firm Netcraft — still lies ahead.
Disaster Recovery isn’t a new concept for IT folks. We’ve been backing up data to offsite locations for years and using in-house data duplication to reduce the risk of losing data stores. But now that cloud adoption has increased, there have been some shifts in how traditional Disaster Recovery is handled.
First, we’re seeing increased adoption of cloud-based backup and disaster recovery. Gartner has stated that between 2012 and 2016, one third of organizations will look at new solutions to replace their current ones, particularly because of cost, complexity or capability. These new solutions address not just data but the applications themselves, and they are paving the way for Disaster Recovery as a Service (DRaaS). Unfortunately, there is still some confusion about when cloud backup services suffice for disaster recovery and when full-fledged DRaaS makes more sense for an organization. Let’s explore four of the key considerations when it comes to DRaaS and cloud backup services.
While the Beatles may not have been too concerned about cloud uptime or network security when they wrote their famous ballad, they had the right idea about turning to friends for a helping hand. It’s a lesson CIOs should take to heart.
After all, the IT department – including network, systems, applications and database administrators – is often the eyes, ears and hands of the CIO, providing front-line information and making recommendations on the latest technology and IT methods to ensure that business is running at optimum efficiency and achieving competitive advantage through technology.