All posts by Lavanya

A Look into the Announcements at AWS’ re:Invent

It’s been the trend for every major company to make significant announcements during its annual conference, and AWS is no different. Its annual conference, re:Invent, took place this year from November 27 to December 1 in Las Vegas. Here’s a look at some of the major announcements the company made over these five days.

Athena

Athena is an interactive query service from Amazon that lets users analyze data stored in its Simple Storage Service, popularly known as S3. This is an important tool because it makes it easy for users to run standard SQL queries directly against data in S3. The service costs $5 per TB of data scanned by each query.
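
To make the workflow concrete, here is a minimal sketch of running an Athena query from Python with boto3; the database, table, and results bucket are hypothetical placeholders, not names from the announcement.

```python
# A minimal sketch of running an Athena query via boto3; the database,
# table, and results bucket below are hypothetical placeholders.
import time
import boto3

athena = boto3.client("athena", region_name="us-east-1")

# Athena reads data in place from S3 and writes query results back to S3.
response = athena.start_query_execution(
    QueryString="SELECT status, COUNT(*) FROM access_logs GROUP BY status",
    QueryExecutionContext={"Database": "weblogs"},  # hypothetical database
    ResultConfiguration={"OutputLocation": "s3://my-athena-results/"},  # hypothetical bucket
)

# Poll until the query finishes; billing is based on bytes scanned ($5/TB).
query_id = response["QueryExecutionId"]
while True:
    status = athena.get_query_execution(QueryExecutionId=query_id)
    state = status["QueryExecution"]["Status"]["State"]
    if state in ("SUCCEEDED", "FAILED", "CANCELLED"):
        break
    time.sleep(1)
print(query_id, state)
```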

Aurora

Aurora is another in-house service from Amazon. This cloud-based relational database is already compatible with the popular open-source MySQL. Soon, PostgreSQL will also be supported by Aurora, according to the announcement made at re:Invent. This announcement drew much applause from attendees.
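
Because Aurora aims for wire compatibility, existing PostgreSQL drivers should work unchanged once the support ships. A minimal sketch using the standard psycopg2 driver, with a hypothetical cluster endpoint and credentials:

```python
# A sketch of connecting to an Aurora PostgreSQL cluster with a standard
# PostgreSQL driver; the endpoint, database, and credentials are hypothetical.
import psycopg2

conn = psycopg2.connect(
    host="mycluster.cluster-abc123.us-east-1.rds.amazonaws.com",  # hypothetical endpoint
    dbname="appdb",
    user="admin",
    password="...",  # in practice, pull this from a secrets store
)
with conn.cursor() as cur:
    cur.execute("SELECT version();")
    print(cur.fetchone())
conn.close()
```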

AWS Batch

This new tool coordinates all the resources needed to run batch jobs on Amazon’s cloud. It will be ideal for institutions or projects that run large-scale computing workloads, which would otherwise entail setting up several virtual machines. With this tool, organizations won’t have to worry about that setup, as it will be handled by Batch.
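
Assuming a compute environment, job queue, and job definition have already been configured, submitting work is a single API call. A hedged boto3 sketch, with all names hypothetical:

```python
# A sketch of submitting a batch job with boto3; the job, queue, and job
# definition names are hypothetical and must already exist in your account.
import boto3

batch = boto3.client("batch", region_name="us-east-1")

response = batch.submit_job(
    jobName="nightly-simulation",        # hypothetical job name
    jobQueue="high-priority-queue",      # hypothetical queue, created beforehand
    jobDefinition="simulation-job:1",    # hypothetical registered definition
    containerOverrides={"command": ["python", "run_simulation.py"]},
)
print("Submitted job:", response["jobId"])
```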

CodeBuild

CodeBuild, as the name suggests, can automatically compile and test code. This can be particularly important for developers who are iteratively building apps on AWS, as it can ensure that changes are implemented as intended. As you may have guessed, it is a new addition to the family of existing development tools, namely CodeCommit, CodeDeploy, and CodePipeline.
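
For a sense of the developer workflow, here is a minimal boto3 sketch that kicks off a build and checks its status; the project name is a hypothetical placeholder for one configured beforehand.

```python
# A minimal sketch of starting a CodeBuild build with boto3; the project
# name is a hypothetical placeholder.
import boto3

codebuild = boto3.client("codebuild", region_name="us-east-1")

build = codebuild.start_build(projectName="my-app-build")  # hypothetical project
build_id = build["build"]["id"]

# Check on the build's status later.
status = codebuild.batch_get_builds(ids=[build_id])["builds"][0]["buildStatus"]
print(build_id, status)
```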

GPUs

Amazon introduced the idea of Elastic GPUs (graphics processing units), which can be attached to any EC2 instance when needed. This is sure to give gaming enthusiasts and CAD customers plenty to cheer about. Currently, the product is in private preview, and more details about its pricing and availability will be disclosed over the next few months.

Greengrass

Amazon gave a big fillip to its IoT arm as well with the announcement of AWS Greengrass. This product provides the software components needed to manage local computing and caching for connected devices. It can be particularly useful for IoT devices that have only an intermittent Internet connection, and it helps a device upload its data to the cloud once a connection becomes available.
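
As a rough illustration, here is what a Lambda function running locally on a Greengrass core device might look like, assuming the Greengrass Core SDK; the topic and sensor names are hypothetical.

```python
# A sketch of a Lambda function deployed to a Greengrass core device,
# assuming the Greengrass Core SDK ("greengrasssdk"); the topic and
# sensor names are hypothetical.
import json

import greengrasssdk

# The "iot-data" client talks to the local message broker on the device,
# so publishing works even while the Internet connection is down;
# Greengrass syncs with the cloud when connectivity returns.
client = greengrasssdk.client("iot-data")

def function_handler(event, context):
    reading = {"sensor": "temp-01", "celsius": event.get("celsius")}
    client.publish(topic="sensors/readings", payload=json.dumps(reading))
    return reading
```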

Glue

AWS Glue is a managed data-preparation (ETL) service that supports advanced analytics on information stored in Amazon’s cloud. Primarily, it identifies where a particular piece of data is located, extracts it, converts it into a specified format, and moves it to the platform where it will be analyzed.
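
Glue was only just announced, so as a rough sketch of the eventual workflow, here is how kicking off a predefined ETL job looks with the present-day boto3 Glue API; the job name is a hypothetical placeholder.

```python
# A hedged sketch of triggering a Glue ETL job with boto3; the job name is
# a hypothetical placeholder for a job defined ahead of time.
import boto3

glue = boto3.client("glue", region_name="us-east-1")

# Kick off a predefined job that extracts, transforms, and loads the data.
run = glue.start_job_run(JobName="orders-to-warehouse")  # hypothetical job
state = glue.get_job_run(JobName="orders-to-warehouse", RunId=run["JobRunId"])
print(state["JobRun"]["JobRunState"])
```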

Lex

Lex is a machine learning service that allows programmers to create interactive, conversational applications. In fact, this technology is at the heart of Amazon’s Alexa platform, and it is now available separately for developers to build applications on the AWS cloud.
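
Here is a minimal sketch of a conversational exchange using the Lex runtime API via boto3; the bot name and alias are hypothetical placeholders.

```python
# A minimal sketch of sending text to a Lex bot through the runtime API;
# the bot name and alias are hypothetical placeholders.
import boto3

lex = boto3.client("lex-runtime", region_name="us-east-1")

response = lex.post_text(
    botName="OrderFlowers",   # hypothetical bot
    botAlias="prod",          # hypothetical alias
    userId="user-42",         # any stable per-conversation identifier
    inputText="I would like to order some roses",
)
print(response["message"])    # the bot's conversational reply
```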

These announcements mark the beginning of exciting times for both AWS and its customers.


CapitalOne Teams With AWS for its Tech Transition

Technology is already ubiquitous, and it is only going to become more integral in the future. Almost every company foresees this trend, which is why many are taking major steps to embrace it. One such company that wants to infuse technology into its operations in a big way, with an aim to meet the growing demands of its digital customers, is CapitalOne. This financial services provider believes cloud is an important technology that needs to be adopted in order to move forward, grow, and continue reaching more customers. To achieve this long-term goal, it has partnered with the leader in cloud, Amazon Web Services (AWS).


Under the terms of the agreement, AWS will be the major cloud partner for CapitalOne. Though the company already uses services from companies like Google and Salesforce for some of its smaller applications, it has announced that AWS will handle all of its legacy migrations.

Some analysts believe this could be a disadvantage for CapitalOne, because it will not have the flexibility to switch between providers to take advantage of their pricing or features. Maybe CapitalOne already considered this, which is why it has announced that AWS will be its major partner, and not an exclusive one! Still, much of its migration to the cloud is going to be handled by AWS.

This decision to move to the cloud comes as a surprise, because financial companies are generally slower to embrace new technology, partly because of the security concerns that come with it. CapitalOne, though, wants to change this trend: it wants to cater to its growing base of tech-savvy customers and bring out new tech-based products and services that are sure to impress them.

CapitalOne’s move to the cloud began in 2013, when it hired people to develop and test cloud-based applications in its innovation lab. The many experiments necessitated the use of cloud, and this led the company to tap into providers’ services at a small scale. By 2015, it became clear that the company had to make a big foray into the cloud to keep pace with the rapidly developing projects in its lab. In addition, the company understood that cloud offers many benefits in terms of scalability, flexibility, and a better user experience for its customers. Due to these factors, the company has taken the big step of partnering with AWS to move all its applications to the cloud. CapitalOne has many mainframe applications too, and these are also likely to be moved to the public cloud soon. Though no timeframe has been mentioned by either company, this transition is expected to begin shortly.

Besides migrating its existing applications, CapitalOne also plans to develop new products, especially for the mobile platform. Currently, its mobile app is one of its most used customer-facing applications, and it was transitioned to the AWS cloud last month on a trial basis. The success of this transition has prompted the company to move all its applications to the cloud.

Overall, this is a strategic move by CapitalOne: it plans to use technology to gain an edge over its competitors and to meet its customers’ expectations.


Plex is Expanding to Other Cloud Providers

Plex, the cloud media provider that helps customers access files across a range of different devices, started out supporting only the AWS cloud platform. Now the company is expanding to include other service providers as well, which means Plex Cloud users can also use Google Drive, Dropbox, and Microsoft’s OneDrive to store and access their files. Plex Cloud already works across major game consoles, smart TVs, and other streaming devices.

Plex launched a beta program in September to give customers continuous access to their media from a wide range of devices. In this beta program, Plex chose AWS as its exclusive cloud provider. However, many users began to face issues with Amazon, and Plex was forced to work through these problems to make its beta program fully operational. In the meantime, the company also decided to expand to other cloud providers.

This is a significant move both for Plex’s customers and for the cloud market in general. The obvious advantage of this expansion is that you can pay the same subscription fee and access files stored across any of the above cloud storage companies. This way, as a user, you’re not restricted to subscribing to AWS when you want to access Plex’s media content.

As for the cloud market, it signifies changing competition. Until a few months ago, AWS was the dominant player, and the other players did not have a significant market share. All this is changing, as is evident from the many changes and deals that have happened over the last six months or so. Google has embarked on an aggressive strategy to increase its market share, evident in its many product releases and acquisitions over the last few months. Likewise, other companies like IBM and Microsoft are coming up with different strategies to woo customers and increase their market share. Due to this growing competition, companies like Plex want to expand their offerings to reach more customers. In all, this expansion reflects the growing might of other companies in the cloud space, and in some ways, also the growing maturity of the cloud market.

Plex is a next-generation media provider that is looking to fully tap the potential of cloud and networking to give customers uninterrupted access to their content from anywhere. This service offers multi-fold advantages for users. Firstly, gone are the days of “always-on” PCs: you are no longer confined to your PC for accessing your media content, as you can now do it on any device. Secondly, there is no need to own or manage a home server, or, for that matter, a storage device like a NAS; you can store all your media with Amazon, Google, Microsoft, or Dropbox. Thirdly, you have the same ease of access with Plex as if the content were stored on your local storage.

With these features and advantages, it won’t be long before Plex becomes a major player in the connected media market.


Juniper Networks Acquires AppFormix

Juniper Networks has announced that it will acquire a startup called AppFormix for an undisclosed price. This acquisition is likely to give a big boost to Juniper’s cloud operations.

AppFormix is a cloud operations management and optimization company founded in 2013 by Sumeet Singh, a former employee of Microsoft and Cisco. In fact, Singh started the Windows Azure team at Microsoft, and later quit the company to pursue his own entrepreneurial interests. AppFormix works with any OpenStack or Kubernetes infrastructure to provide real-time insights into the health and operations of customers’ clouds. Built on big data analytics and machine learning, AppFormix is redefining telemetry and cloud management with features that include historical monitoring, visibility, and dynamic performance optimization for private, public, and hybrid cloud environments. As a result, the software has the power to create a self-driving infrastructure for every cloud.

Juniper Networks, headquartered in Sunnyvale, California, was founded almost 20 years ago by Pradeep Sindhu. Today, it is a leader in routers, switches, security, and software, and is looking to establish a strong foothold in the cloud market as well. It already has a cloud product called Contrail, which gives customers a platform to create, scale, and seamlessly join different OpenStack clouds using secure and intelligent networks. The obvious advantage of these federated clouds is the reliability and flexibility that come with combining different clouds, not to mention the benefits of speed, agility, and operational excellence for the organization.

Such a federated platform has helped Juniper carve a niche for itself, and this is all set to get a boost with the acquisition of AppFormix. Specifically, the combination of Contrail and AppFormix will improve cloud security, accounting, planning, and the implementation of cloud projects. As a result, customers stand to gain more secure, intelligent, and automated operations at a much lower cost. For the company, such benefits are sure to translate into more happy customers.

Under the terms of the deal, the AppFormix team will report directly to the CTO of Juniper Networks, Pradeep Sindhu. AppFormix will continue to exist as a standalone brand and will have the freedom to develop its own platform. No layoffs or changes are expected at this time, and both companies believe the deal will close by the end of 2016.

This deal is significant not just for the two companies but also for the cloud and networking industry at large, as analytics and machine learning can help companies stay on top of their ever-growing networks. These tools are essential to keep pace with the unprecedented growth of networks. In this sense, the coming together of AppFormix and Contrail gives a lot to cheer about.

Though current users may not see any changes right away, this acquisition is sure to augur well for Juniper’s customers in the future, as they will gain an orchestrated monitoring system while connecting across different cloud environments.


What is Snowball Edge?

Snowball Edge is a hardware appliance released by Amazon Web Services that incorporates both computing and storage power to help customers run tasks and store data locally. The product was announced at AWS’ annual re:Invent event, which took place in Las Vegas last week.

Snowball Edge is an extension of AWS’ Snowball, a data transport product that helps transfer large amounts of data to and from the AWS cloud. The service can move up to petabytes of data across multiple appliances, all through a secure connection. The main challenges in moving data at this scale are high network costs and security, and Snowball was created to address both. When you create a Snowball job with AWS, the device is shipped to you. Once it arrives, simply attach it to your network and run the client to establish a secure connection. Then you’re all set to transfer large amounts of data at almost one-fifth of the cost.

Now that you know what Snowball is, it’s easy to understand what its extension does. Snowball Edge expands the scope of Snowball with increased connectivity and higher storage. It also enhances horizontal scalability through clustering and provides new storage endpoints that connect to S3 and NFS clients. Its Lambda-powered local processing makes it a handy tool for heavy computational tasks. The device can store about 100 TB of data, an upgrade from Snowball’s maximum capacity of 80 TB.
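
Since Snowball Edge exposes an S3-compatible endpoint, ordinary S3 tooling should be able to write to the appliance directly. A hedged boto3 sketch, using a hypothetical local address and bucket:

```python
# A hedged sketch of writing to a Snowball Edge appliance through its
# S3-compatible endpoint; the local address and bucket are hypothetical,
# and credentials come from unlocking the device with the Snowball client.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://192.0.2.10:8443",  # hypothetical appliance address
)

s3.put_object(Bucket="field-data", Key="survey/day1.csv", Body=b"...")
for obj in s3.list_objects_v2(Bucket="field-data").get("Contents", []):
    print(obj["Key"], obj["Size"])
```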

In addition, Snowball Edge comes with a rugged design that can withstand plenty of wear and tear at home, and it can also be used in industrial, agricultural, and military environments, thereby widening the scope of its usage. Such a hardy design also helps with rack mounting, especially when you want to use the product’s clustering feature.

In terms of connectivity, Edge offers many options. Data can be transferred to Edge over cellular data or Wi-Fi from any IoT-enabled device, and there’s a PCIe expansion port for additional data transfer. You can also use it with a wide range of network options such as 10GBase-T, 10 or 25 Gb SFP28, and 40 Gb QSFP+. With such advanced options, you can transfer about 100 TB of data within just 19 hours; with less capable hardware, the same process can take a week.
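
A quick back-of-the-envelope check shows the quoted figure is consistent with those network interfaces:

```python
# Moving 100 TB in about 19 hours implies a sustained rate of roughly
# 12 gigabits per second, which is plausible over the 10-40 Gb interfaces
# listed above.
terabytes = 100
seconds = 19 * 3600
gigabits_per_second = terabytes * 8_000 / seconds  # 1 TB = 8,000 Gb
print(f"{gigabits_per_second:.1f} Gb/s sustained")  # ~11.7 Gb/s
```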

Further, you can use clustering to gain the benefits of horizontal scaling. For example, you can configure two or more Edge appliances into a cluster, so the entire setup has greater durability and higher capacity. Such an option makes it more convenient to store and handle large amounts of data. You can also remove devices when you want to shrink the storage and computational power of your system. This scalability and flexibility is truly where Snowball Edge scores over other devices.

In all, this is yet another innovative product from AWS, one that is sure to bring computation and cloud storage a lot closer to people.


Can Cloud Computing Aid NASA in its Search for Life?

NASA is constantly looking for ideas and technologies that will help in its quest for extraterrestrial life, and cloud is one technology with the capability to aid this search.

In fact, NASA understands the potential of cloud, which is why Tom Soderstrom, the chief technology and innovation officer at NASA’s Jet Propulsion Laboratory, visited the re:Invent conference hosted by Amazon Web Services. The two organizations have already started working together on a number of projects, and their partnership is likely to extend to more in the future.

Here’s a look at how cloud technologies play a crucial role in some of NASA’s projects.

Surface Water and Ocean Topography (SWOT)

The mission of this program is to make the first global survey of Earth’s water from space, to get a better understanding of oceans and Earth’s terrestrial water levels. The project involves both US and French oceanographers, and could provide much-needed answers for meeting the water needs of an ever-growing population. It generates about 100 TB of data a day, or roughly a gigabyte every second. It’s impractical for NASA’s own data centers to handle such large amounts of information, so the agency is taking the help of cloud providers to store and analyze this data.
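
A quick conversion puts that daily volume in perspective:

```python
# 100 TB per day works out to a sustained ingest rate of a bit over a
# gigabyte every second.
tb_per_day = 100
gb_per_second = tb_per_day * 1000 / (24 * 3600)  # 1 TB = 1,000 GB
print(f"{gb_per_second:.2f} GB/s")  # ~1.16 GB/s
```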

NASA-ISRO Synthetic Aperture Radar (NISAR) Mission

Like SWOT, this program plans to record the impact of climate change, and even predict the occurrence of natural hazards with greater precision. NISAR is a joint program between NASA and the Indian Space Research Organisation (ISRO). The project is already underway and is expected to send back large streams of data within the next few years. This data will also be stored and analyzed in the cloud, as it’s too much for any single data center to handle.

Asteroid Redirect Mission

This is NASA’s first robotic mission to visit a large near-Earth asteroid, collect a multi-ton boulder from its surface, and redirect it into an orbit around the moon. This massive project is expected to be completed in the 2020s, and should provide data for a human mission to Mars in the 2030s.

This project hasn’t started yet because NASA is still looking for the right asteroid: one close to Earth, from which a boulder can be redirected toward the moon. Identifying it requires sifting through tons of data and making complex calculations, and this is where cloud computing can help.

Europa Lander

Europa is one of Jupiter’s moons, and it is completely covered with ice. NASA wants to explore the possibility of water below its frozen surface, and even look for the presence of life there. The first step toward that goal is selecting the right landing spot for a lander, so it can sample the surface and possibly even bring back some ice. Again, identifying this landing spot requires enormous calculations, considering that Europa is the sixth-largest moon in our solar system.

In short, cloud can provide the storage and computing power that NASA needs to press ahead with its different programs.


American Airlines Turns to IBM for Cloud Tech

American Airlines, the ubiquitous US air carrier, has turned to IBM for cloud tech. Both companies announced early Tuesday morning that the airline will use IBM’s cloud for some of its applications. Though neither company gave a complete list of the applications that will be moved to IBM’s cloud, American is expected to move some of its legacy applications there.

The financial terms of the deal have not been disclosed, which is not surprising considering that IBM is just one of American’s cloud partners, with no exclusivity involved. American is free to strike deals with other companies, and can even use several cloud providers to host different applications, though no other deal has been mentioned so far. Amazon is the market leader, though other providers like Google, Microsoft, IBM, and VMware are catching up.

This deal between IBM and American Airlines reflects a long-standing partnership between the two giants that goes back decades. For example, when American Airlines wanted to introduce an online reservation system called SABRE, it was IBM that developed and managed it for the airline. This deep relationship is partly why American chose IBM when it decided to move to the cloud. Going forward, the two companies are expected to further cement their partnership, and may even tap into IBM’s Watson, the artificial intelligence software that IBM is customizing to meet the needs of different clients.

A few months ago, American Airlines’ chief information officer, Maya Leibman, announced that the company would soon embrace the cloud to leverage the opportunities that come with it. This deal with IBM seems to be the first step toward that goal. It is clear that American has started modernizing its technology to keep up with the growing demands of its customers, who expect faster, more reliable online tools. Its growing digital footprint also necessitates a scalable infrastructure, a need best met by a cloud architecture.

In fact, the airline is not alone in making this massive technological shift. Most companies the world over are looking to move some or all of their operations to the cloud to reap the benefits of such a move. As data volumes grow, companies can choose to run their own data centers or store their data and applications in a service provider’s infrastructure. There are advantages and disadvantages to both choices, so companies sometimes prefer to keep some data in their own data centers and some in the cloud. American Airlines is taking this hybrid approach for now, as it plans to keep some applications on premises. But that may change, depending on how the move to the cloud plays out for the company.

Thus, this is the first baby step American Airlines has taken in its foray into the cloud, and over time the company may move all of its applications there.


Sydney is Alibaba’s New Datacenter Destination

Alibaba is making rapid strides in the global cloud market with the opening of a new data center in the Australian city of Sydney. This is one of four international locations where Alibaba plans to open data centers over the next year. The other three locations have not yet been revealed by the company, though they are expected to be in Dubai, Germany, and Japan. These data centers are part of the $1 billion investment the company has allotted to expanding its global footprint in the cloud market. The services likely to be offered in the new data center include storage, analytics, cloud security, and middleware for enterprises.

Besides announcing Sydney as one of the locations, Alibaba said it plans to expand its team in Sydney and across Australia to meet the growing demand for its business there, and to service the new data center. The company even hinted that it will open more such centers in Sydney and other parts of Australia, based on the success of this one. Alibaba’s strategy of team expansion and its choice of Sydney are a no-brainer, considering that China is Australia’s largest trading partner. Currently, China accounts for more than one-third of the goods and services exported from Australia, and these numbers are expected to grow over the next five years, thanks to the historic China-Australia Free Trade Agreement signed in December 2015.

Alibaba has made a strategic move by choosing Sydney, as small and mid-size companies in Australia are always looking for ways to expand into the Chinese market. The company can also provide a wider range of cloud products and services than any other Australian provider, partly because of its size and infrastructure.

Recently, the company opened a new office in Melbourne to help Australian cloud customers increase their presence in China, and this data center is expected to complement that service. As of now, Australian businesses can make the most of Alibaba’s cloud storage, data processing, middleware, and its payment portal Alipay to reach Chinese clients. The data center will also give Australian businesses a chance to expand globally to other countries, as they can now depend on a reliable and scalable infrastructure.

Established in 2009, Alibaba’s cloud business has made rapid strides in the cloud market. With more than 2.3 million subscribers, an annual turnover of more than $1 billion, and an annual growth rate of more than 130 percent, Alibaba is surely one of the fastest-growing cloud companies in the world. Currently, Alibaba has 14 data centers located in mainland China, Hong Kong, Singapore, and on the east and west coasts of the USA. The company also processed the largest-ever volume of online shopping in a single day, handling a record-breaking $17.7 billion in sales on China’s “Singles Day,” November 11. With such a proven infrastructure, it’s no surprise that the company is expanding beyond Chinese shores.


What is Intercloud?

Intercloud, as the name suggests, is a network of clouds that are connected with each other in some form. This includes private, public, and hybrid clouds that come together to provide a seamless exchange of data, infrastructure, and computing capability. In many ways, it is similar to the Internet: the network of networks that powers the world today.

The concept of Intercloud started as a research project at Cisco in 2008, and was later taken over by the Institute of Electrical and Electronics Engineers (IEEE). It is based on the idea that no single cloud can provide all the infrastructure and computing capability needed for the entire world. If a cloud has no presence in a particular geographic region but gets a request for storage or computation there, it should still be in a position to fulfill it. Likewise, if a cloud has reached its maximum capacity and more resources are needed, it should be able to borrow from another cloud so seamlessly that the user has no idea whether the service is coming from a single cloud or a combination of clouds. To achieve these objectives, Intercloud is seen as the best solution.

So, how does it work? Let’s look at a practical scenario. You’re traveling in a foreign country, and you use your cell phone to make a call. The call will go through and you’ll talk to the person you want, even if your service provider has no presence in the country you’re visiting. How is this possible? Cell phone providers enter into agreements with providers in different countries to use their networks for routing calls; in technical terms, this is called inter-carrier operability. As a result of these agreements, your call is routed through a partner’s network. The best part is that you have no idea how your call is routed, and you don’t need to care about the technical details, as long as you can make the call you want. The same principle applies to Intercloud.

When a cloud is saturated or gets a request from a new geographic region, it simply taps into its partners’ infrastructure to deliver the service you want. Here too, you’ll never know which cloud provider is servicing you, as long as you can store or access what you want. Such convenience can help cloud providers offer a more comprehensive service to their customers. For these reasons, more cloud providers are looking to enter into strategic partnerships with providers who have a strong presence in local regions.
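
To make the routing principle concrete, here is a toy sketch; the provider names and regions are entirely hypothetical, and a real Intercloud federation would rely on standardized protocols rather than this simplified logic.

```python
# A toy illustration of the Intercloud routing principle described above:
# serve a request locally when possible, otherwise delegate transparently
# to a federated partner with presence in the region. All names are
# hypothetical.
from dataclasses import dataclass

@dataclass
class Cloud:
    name: str
    regions: set
    partners: list

    def handle(self, request_region: str) -> str:
        # Serve from our own infrastructure if we cover the region...
        if request_region in self.regions:
            return f"{self.name} serves {request_region} directly"
        # ...otherwise route through a federated partner.
        for partner in self.partners:
            if request_region in partner.regions:
                return f"{self.name} delegates {request_region} to {partner.name}"
        return f"no provider available in {request_region}"

partner = Cloud("regional-cloud", {"ap-south"}, [])
home = Cloud("home-cloud", {"us-east", "eu-west"}, [partner])

print(home.handle("eu-west"))   # served directly
print(home.handle("ap-south"))  # routed via the partner, invisibly to the user
```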

Currently, this technology is in its nascent stage, as it requires substantial effort and technological advancement to improve interoperability and sharing among providers. The good news is that many companies, and organizations such as the IEEE, have started working toward it, so we can expect the concept of Intercloud to become a reality soon.


Cohesity Launches DataPlatform CE for AWS and Azure

Cohesity has recently launched a cloud extension of its existing product, called DataPlatform Cloud Edition, which can run on both Amazon Web Services and Microsoft Azure. This California-based startup specializes in hyperconverged secondary storage for clients located around the world.

This announcement is good news for companies that depend on the cloud for their services, as well as for the cloud market as a whole. Currently, the storage market is highly fragmented, so it is almost impossible for companies to get seamless data portability from their on-premises locations to the cloud. In most cases, moving data to the cloud requires special software or gateways, not to mention the data format conversions that come with it. The entire migration process becomes cumbersome and can lead to a lot of frustration for companies looking to leverage the power of the cloud.

To overcome this problem, Cohesity provides native replication for all data, making it easy to move from DataPlatform deployments located on-premises to DataPlatform CE deployments located in the cloud. The CloudReplicate feature ensures that replication happens instantly from on-premises sites to remote cloud locations. In addition, the seamless integration with both AWS and Azure makes it convenient for companies to tap into the scalability and reduced cost that come with cloud computing and storage. With these options, more companies are expected to move their operations to the cloud, which augurs well not just for Cohesity but for the cloud market as a whole.

Another advantage of Cohesity’s DataPlatform Cloud Edition is that it consolidates backup, archive, and DevOps workloads into an efficient and scalable architecture that can run completely in the cloud. Customers can also make the most of the Hadoop-based analytics resources available in DataPlatform Cloud Edition.

Further, the product makes transitions between private and public clouds a lot easier, as it handles all the bottlenecks, including conversion of data between different formats. Since many companies prefer a hybrid environment for their data and applications, Cohesity’s DataPlatform could turn out to be a sought-after product in the near future. What’s more, it can be licensed through public cloud service providers like AWS and Azure too.

This edition is currently in preview mode, and can be accessed through an early access program. A complete version is expected to be rolled out during the first half of 2017.

Cohesity was founded in 2013 by Mohit Aron, who is also credited with co-founding another cloud company, Nutanix. With about 108 employees and $70 million in venture funding from firms like Google Ventures and Sequoia Capital, the company has made rapid strides over the last three and a half years. Its first product, a web platform to consolidate and manage secondary storage and its associated workflows, was launched in July 2015. Its flagship products are DataPlatform, launched in October 2015, and the recently announced DataPlatform Cloud Edition.
