Using Amazon Web Services for Elasticity and Scale to Build Cloud Applications
Traditional IT environments built on physical servers can scale only by buying new hardware and software, then taking the time to install and rack the hardware and configure the software and the application. If or when the excess capacity is no longer needed, the servers stand idle, consuming power, cooling, and rack space. This is inefficient and a waste of money.
Amazon Web Services (AWS) allows customers to scale elastically with demand. Just as a rubber band stretches to accommodate more items, AWS provides elastic computing that lets a customer scale up (or down), growing or shrinking their architecture quickly and efficiently with minimal intervention.
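The scale-up/scale-down behavior described above can be sketched as a simple policy function. This is an illustrative model only, not actual AWS code; the thresholds, instance limits, and the function name `desired_instances` are all invented for the example.

```python
# Illustrative sketch of an elastic scaling decision (not AWS code).
# All thresholds and fleet sizes here are hypothetical.

def desired_instances(current, cpu_utilization,
                      scale_up_at=70.0, scale_down_at=30.0,
                      min_instances=1, max_instances=10):
    """Return the fleet size a simple auto-scaling policy might choose."""
    if cpu_utilization > scale_up_at and current < max_instances:
        return current + 1           # demand spike: stretch the band
    if cpu_utilization < scale_down_at and current > min_instances:
        return current - 1           # demand drops: shrink back, stop paying
    return current                   # within bounds: no change

print(desired_instances(4, 85.0))    # spike -> 5
print(desired_instances(4, 20.0))    # idle  -> 3
```

In a physical data center, the "scale up" branch would mean weeks of procurement and racking; in an elastic environment it is a single provisioning call.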
The following is a summary of an AWS whitepaper and webinar that describes how you can use AWS to architect your system for the cloud.
Anybody who is considering a move to the Cloud knows that the greatest economic motivation for Cloud Computing is the pay-as-you-go, pay-for-what-you-need utility computing benefit, right? Deal with spikes in demand much more cost-effectively, the public Cloud service providers gush, since we can spread the load over many customers and pass the savings from our economies of scale on to you. The utility benefit is also a central premise of Private Clouds. Build a Private Cloud for your enterprise, the vendors promise, and you can achieve the same economies of scale as Public Clouds without all that risk.
Unfortunately, what sounds too good to be true usually is. There are a number of gotchas on both the Public and Private Cloud provider sides that limit—or even prevent—organizations from obtaining a full measure of the utility benefit. Let’s go back to economics class and take a closer look …
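The pay-as-you-go argument is easy to put into numbers. The sketch below compares provisioning for peak load on-premises against paying per server-hour for a spiky workload; every price and load figure is invented purely for illustration.

```python
# Back-of-the-envelope sketch of the utility-computing argument.
# All prices and load figures below are invented for illustration.

def owned_cost(peak_servers, monthly_cost_per_server):
    # On-premises: you must provision for peak and pay for it around the clock.
    return peak_servers * monthly_cost_per_server

def utility_cost(hourly_load_profile, price_per_server_hour):
    # Pay-as-you-go: you pay only for the server-hours actually consumed.
    return sum(hourly_load_profile) * price_per_server_hour

# A spiky workload: 2 servers for most of the day, 10 at the daily peak.
day = [2] * 20 + [10] * 4
month = day * 30

print(owned_cost(10, 200))                   # provisioned for peak -> 2000
print(round(utility_cost(month, 0.30), 2))   # pay for what you used -> 720.0
```

The gap between the two figures is exactly the margin that the "gotchas" discussed next can eat into, which is why the utility benefit deserves scrutiny rather than faith.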
The focus of Java EE 7 is on the cloud, and specifically it aims to bring Platform-as-a-Service providers and application developers together so that portable applications can be deployed on any cloud infrastructure and reap all its benefits in terms of scalability, elasticity, multitenancy, etc. The existing specifications in the platform such as JPA, Servlets, EJB, and others will be updated to meet these requirements.
Java EE 7 continues the ease-of-development push that characterized prior releases by bringing further simplification to enterprise development. It also adds important new APIs such as the REST client API in JAX-RS 2.0 and the long-awaited Concurrency Utilities for Java EE API. Expression Language 3.0 and Java Message Service 2.0 will undergo an extreme makeover to align with improvements in the Java language. There are plenty of improvements to several other components, and newer web standards such as HTML5 and WebSocket will be embraced to build modern web applications.
In a world of ever-increasing demand for performance versus cost, optimizing infrastructure utilization is critical. With today’s easy access to extremely high-performance hardware and the overabundance of management software, it is easy to oversubscribe, either accidentally or intentionally. Plenty of these discussions center on RAM and CPU, but what about storage and bandwidth, and who is going to manage it all? There is a delicate yet optimal balance between performance, redundancy, and capacity.
In his session at the 10th International Cloud Expo, Mike Carpenter, VP Business Development at CARI.net, will discuss how CARI.net utilizes efficient and powerful hardware along with both open source and proprietary software to achieve that ideal balance.
The proliferation of device connectivity is redefining the functionality requirements and capabilities of many embedded systems as more and more of these devices look to leverage the “Cloud.” While many commercial software and hardware component vendors have begun to realign their value propositions to satisfy growing demand, commercial-off-the-shelf products (COTS) alone cannot meet every OEM’s needs. As a result, the Embedded Cloud has injected a new level of uncertainty and a new competitive dynamic within the embedded ecosystem.
In his session at the 10th International Cloud Expo, Chris Rommel, VP Embedded Practices at VDC Research Group, will discuss the key question: What companies or even types of companies (hardware, software, or third-party engineering vendors) will emerge as the primary providers of professional services to supplement the COTS components and enable this next level of system connectivity and functionality?
With Cloud Expo 2012 New York (10th Cloud Expo) now just eight weeks away, what better time to introduce you in greater detail to the distinguished individuals in our incredible Speaker Faculty for the technical and strategy sessions at the conference…
We have technical and strategy sessions for you every day from June 11 through June 14 dealing with every nook and cranny of Cloud Computing and Big Data, but what of those who are presenting? Who are they, where do they work, what else have they written and/or said about the Cloud that is transforming the world of Enterprise IT, side by side with the exploding use of enterprise Big Data – processed in the Cloud – to drive value for businesses…?
CIOs today have the opportunity to become cloud champions in their organizations, building innovative new IT models that drive new business opportunities. Whether your business is purchasing a single cloud application or driving a company-wide cloud strategy, it is essential to centralize, secure and manage the flow of information in and out of your firewall and to and from the cloud.
In his session at the 10th International Cloud Expo, Rick Nucci, founder and general manager of Dell Boomi, will outline why every successful cloud strategy must start with an integration strategy and how this can help CIOs take the reins in their cloud strategies.
Anyone who’s involved with web application performance – either measuring or addressing – knows there are literally hundreds of RFCs designed to improve the performance of TCP (which in turn, one hopes, improves the performance of HTTP-delivered applications that ultimately rely on TCP).
But what may not be known is that there are a number of variations on the TCP theme; slightly modified versions of the protocol that are designed to improve upon TCP under specific network conditions.
A Burton IT research note (G00218070), “Wireless Performance Issues and Solutions for Mobile Users” published in January 2012 goes into much more detail on these variations. As this is a WILS post, I will keep it short and sweet, and encourage you to read through the aforementioned research note for more details or visit the homepages / RFC details for each variation.
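To make the "variations on the TCP theme" concrete, here is a toy model of classic slow start and congestion avoidance (in the spirit of RFC 5681, heavily simplified). The variants mentioned above differ mainly in how they grow and shrink this congestion window under specific network conditions; the function below is a hypothetical sketch, not any real stack's implementation.

```python
# Toy model of a classic TCP congestion window (cwnd), heavily simplified.
# Real variants (for wireless, high-latency, or lossy links) mostly change
# how cwnd grows and how it reacts to loss.

def reno_style_cwnd_trace(rounds, ssthresh=16, loss_rounds=()):
    """Trace cwnd over a number of round trips; loss_rounds lists RTTs with loss."""
    cwnd, trace = 1, []
    for r in range(rounds):
        trace.append(cwnd)
        if r in loss_rounds:
            ssthresh = max(cwnd // 2, 2)  # multiplicative decrease on loss
            cwnd = ssthresh
        elif cwnd < ssthresh:
            cwnd *= 2                     # slow start: exponential growth
        else:
            cwnd += 1                     # congestion avoidance: linear growth
    return trace

print(reno_style_cwnd_trace(8))  # [1, 2, 4, 8, 16, 17, 18, 19]
```

On a lossy wireless link this halving on every loss is exactly what hurts, which is why the variations covered in the research note exist at all.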
IT has already jumped on the SaaS bandwagon. They’ve realized that it is easier to use a SaaS application than to build and support a custom application in-house. As cloud computing becomes increasingly pervasive, businesses will move more and more of their infrastructure into a public or private cloud, initially looking to benefit from the economics and efficiency. As end users become more familiar with consuming infrastructure on-demand, the role of IT will change from being more project-oriented to being more service-oriented, essentially delivering IT-as-a-Service.
Historically, IT initiatives have been implemented on a per-project, per-integration basis – essentially the antithesis of how cloud consumption models operate. How, then, can organizations reconcile their business and IT objectives to ensure they remain on similar paths?
The answer is to take IT to the cloud as well.
According to a 2011 survey by the Independent Oracle User Group, over 50% of Oracle’s customers have deployed or are considering deploying private clouds. Most private clouds today support non-production workloads because enterprises are unable to deploy mission-critical applications in their private cloud.
In his session at the 10th International Cloud Expo, Anand Akela, Sr. Principal Product Director at Oracle, will discuss how the same Oracle technology that powers the Oracle Public Cloud enables you to deploy mission-critical applications in your cloud.