All posts by Lavanya

Maryland is all set to move its services to the cloud

Maryland is leading the way in moving its services to the cloud: it has started building a cloud-based platform to modernize its customer-facing processes, specifically its human services technology infrastructure.

Maryland is creating a product called the Total Human Services Information Network, dubbed MD THINK. This is a cloud-based repository that will offer integrated data to all the programs and projects handled by the departments of Human Services, Licensing and Regulation, Juvenile Services, and Health and Mental Hygiene.

The obvious advantage of such a system is that users can get the information they want from a single repository. In the past, data was siloed in different departments, making it difficult for one department to access information held by another. As a result, agencies could not make the best decisions because no department ever had the complete picture of an individual. Without that full picture there was no scope for contextual decisions, and this impeded the success of many of the state's programs, not to mention the hardships it caused the residents of Maryland.

MD THINK came into being to overcome this problem. Gov. Larry Hogan and his administration are striving to make the project a reality by investing $14 million from the budget for fiscal year 2017.

According to experts, MD THINK will use a scalable, cloud-based platform that will streamline all the processes and consolidate them into a single repository for data storage and retrieval. The first phase will focus on children and families, so that social workers can better cater to their needs.

In addition, as part of MD THINK, case workers will be given a tablet device to use during field work. This will help them record information right away and even look up any pertinent information that can help them offer better service.

There are many advantages to such a program. First off, Maryland is likely to save a ton of time and money, as it can streamline its resources and use them more productively. More importantly, it will be able to gather analytics that provide deep insights into program beneficiaries, how the programs help them, and what more can be done to improve the quality of life of Maryland citizens.

According to Gov. Hogan, this project will transform the way the state of Maryland delivers human services to its residents, and it will finally bring the process of delivering government services into the 21st century.

This cloud-based project, in many ways, reflects the power of technology and how it can be used to make our society a better place to live. Though the benefits of the cloud are well understood, not all government services are ready to move to it. This can be due to a combination of factors such as existing legacy systems, the cost of migrating existing data and systems, budget constraints, the mindset of policymakers and more.

Hopefully, this step by Maryland paves the way for other states to follow suit and embrace the cloud in a big way.


SUSE is HPE’s Main Linux Provider

SUSE is a major Linux provider, and it recently entered into an agreement with HPE under which the two companies will tap into each other's assets.

Under the terms of this partnership, SUSE will acquire HPE's cloud assets, namely HPE OpenStack and HPE Stackato. Using these assets, SUSE plans to expand its own OpenStack Infrastructure-as-a-Service (IaaS) solution, which in turn will accelerate its entry into the Cloud Foundry Platform-as-a-Service (PaaS) market.

According to a company release, the OpenStack assets from HPE will be integrated into SUSE's own OpenStack cloud, helping SUSE bring to market a certified, enterprise-ready solution for the clients and customers who use the SUSE ecosystem.

In addition to the asset sale, HPE has named SUSE as its preferred open source partner for Linux, OpenStack and Cloud Foundry solutions.

While it may sound like SUSE is the main beneficiary of this partnership, in reality it's a win-win for both companies. Under the agreement, HPE will use SUSE's OpenStack Cloud and Cloud Foundry solutions as the foundation for its popular Helion Stackato and Helion OpenStack products. HPE believes that by partnering with SUSE, it can provide best-in-class PaaS solutions that are simple to deploy in its customers' multi-cloud environments.

From these terms, it's clear that HPE and SUSE will hire programmers and do the cloud development work together, while HPE will sell and deploy the resulting services and provide support for them. Of course, this is not an exclusive partnership, and SUSE remains open to finding other partners in the future.

The companies also have a non-exclusive agreement under which HPE has the right to use SUSE's OpenStack IaaS and Cloud Foundry PaaS technology for its own development of its Stackato and OpenStack platforms.

This agreement reflects the long and complex relationship between the two companies. A few years ago, HPE merged its non-core software assets with Micro Focus, the company that owns SUSE. SUSE has also long worked with HPE on the Linux side. This additional partnership will further cement the relationship between the two companies in the long run.

So, how is this beneficial to everyone involved?

For SUSE, this partnership marks its evolution from a Linux provider into a full-fledged cloud software company, putting it in a better position to take on competitors like Red Hat and Canonical. In this sense, the partnership signals a strong entry into the cloud, and unlike those two rivals, SUSE has a major computing partner in HPE.

As for HPE, this joint development effort can greatly cut down the time and resources needed to create applications. In today's competitive market, getting products to market as quickly as possible is key, and sharing the effort with SUSE can help HPE significantly speed up its development process.

Given the enormous benefits for both companies and for the cloud industry as a whole, this partnership can be a significant one for everyone involved.


Google Offers More Discounts for Customers

Google has once again lowered the cost of its offerings to make them a more attractive option compared to Azure and AWS. Customers can now get a whopping 57 percent discount when they commit to buying a certain amount of CPU cores and memory. The catch is that customers must sign a one-year or three-year contract to get this discount.
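As a rough sketch of how such committed-use pricing plays out, here is a back-of-the-envelope calculation. The 57 percent figure comes from the announcement, but the hourly on-demand rate below is a made-up placeholder, not a real Google price.

```python
# Rough committed-use pricing math. Only the 57% discount is from
# the announcement; the hourly rate is a hypothetical placeholder.

ON_DEMAND_RATE = 0.10        # hypothetical $/hour for a vCPU+memory bundle
COMMITTED_DISCOUNT = 0.57    # discount for a one- or three-year commitment
HOURS_PER_YEAR = 24 * 365

on_demand_cost = ON_DEMAND_RATE * HOURS_PER_YEAR
committed_cost = on_demand_cost * (1 - COMMITTED_DISCOUNT)

print(f"On-demand for a year: ${on_demand_cost:,.2f}")
print(f"Committed for a year: ${committed_cost:,.2f}")
print(f"Savings:              ${on_demand_cost - committed_cost:,.2f}")
```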

This offering is similar to AWS's Convertible Reserved Instances, which let users reserve compute capacity in the cloud and change the instance type later while retaining the discounted pricing. So far, one of the major things differentiating Google from other providers is that clients don't have to commit to a particular instance size; rather, they can pick and choose the configuration that best meets their needs at any given point in time.

While retaining this flexibility, Google is also enticing customers to commit to a particular size, probably in a bid to take on Amazon. Overall, this is a smart move by Google as it takes on the two leading companies in the cloud market, Microsoft and Amazon. With this steep discount, it is effectively getting new and existing customers to commit for the long term, assuring itself that their business will not go to its competitors.

Though this is not a new strategy, it is nevertheless a proven one. Cell phone companies are known to entice customers with free or lower-priced smartphones in exchange for commitments of three years or more, and they have been successful with it; such contracts have become the norm in today's cell phone industry. By applying the same strategy, Google hopes it will work for its cloud business too.

This price cut is part of the cloud war between AWS, Microsoft and Google that we've been watching for the last few years. In fact, it has become common for these three companies to cut prices in response to one another's moves. Regardless of the war, it's the customer who ultimately benefits from this competition, getting better service at lower cost.

Beyond this price cut, Google will continue to offer discounts for sustained use of its platform. Currently it offers 24 percent off the list price of a particular machine, and this will continue. Google is also planning to cut the price of its virtual machines by eight percent for customers using its Japanese virtual servers and by five percent for its US customers.

In addition to this strategy, Google wants to expand its facilities so it is in a better position to serve the needs of its customers. To this end, it announced that it's launching three data centers, one each in California, Canada and the Netherlands, bringing its total number of facilities to 17 locations.

Let's see how much tighter Google can make this race.


eBay is Planning its Own Public Cloud-Based Platform

eBay, the world's largest marketplace, has launched its own public cloud platform that will run parallel to its existing on-premises system. This is the first step in the company's plan to fully migrate from an on-premises platform to the cloud. The company claims it made this transition within six months, no ordinary feat considering that, at any given moment, eBay has at least one billion listings spread across the 200 countries in which it operates.

If you're wondering why it transitioned to the cloud, the answer is easy. The cloud offers a ton of benefits for businesses, which is partly why almost every major company in the world now runs some or all of its major applications in the cloud. eBay wanted to follow suit and tap into those advantages.

The next obvious question: why did eBay build a public cloud platform from scratch instead of choosing a prominent platform like AWS or Azure? There are several reasons.

First off, eBay wants to cater to a growing digital customer base as new generations of tech-savvy buyers and sellers emerge. With its own platform, it has the flexibility to customize and add all the features it wants.

Secondly, eBay will have a higher level of control if it owns the cloud than if it stores all its data in a system owned by another company. Say it stored all its important applications on AWS. What happens if Amazon decides to start a business similar to eBay's? Technically, this shouldn't affect eBay's operations, but it could still create doubts in the minds of its customers. Moreover, by owning the cloud, eBay has greater visibility into what's going on inside its operations.

The third factor is cost. In general, the cloud is cheaper than investing in capital items like servers and data centers. However, this holds only while your business is small and you use a modest amount of cloud capacity. Remember, most cloud providers use a pay-as-you-go model, which means that as your usage grows, so does your bill.

As your business grows, there comes a point when the subscription fees you pay a cloud provider exceed what the equivalent capital expenditure would have been, and by then it's often too late to turn back. To avoid that situation, eBay decided to build its own platform, a sensible strategy given that eBay already has more than one billion listings, a number only likely to grow over the next few years.
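To see why such a crossover point exists, here is a toy break-even calculation; every number in it is an illustrative placeholder rather than eBay's actual costs.

```python
# Toy break-even sketch: at what usage does pay-as-you-go cloud
# spending overtake owning your own infrastructure? All numbers
# are illustrative placeholders, not eBay's real costs.

CLOUD_COST_PER_TB_MONTH = 25.0   # hypothetical $/TB/month, pay-as-you-go
OWNED_CAPEX = 2_000_000.0        # hypothetical up-front data center spend
OWNED_OPEX_PER_TB_MONTH = 5.0    # hypothetical ongoing $/TB/month when owned
MONTHS = 36                      # horizon over which to amortize the capex

def cloud_cost(tb):
    return CLOUD_COST_PER_TB_MONTH * tb * MONTHS

def owned_cost(tb):
    return OWNED_CAPEX + OWNED_OPEX_PER_TB_MONTH * tb * MONTHS

# Scan usage levels to find the crossover point.
for tb in range(0, 10_001, 500):
    if cloud_cost(tb) > owned_cost(tb):
        print(f"Break-even near {tb} TB: cloud ${cloud_cost(tb):,.0f} "
              f"vs owned ${owned_cost(tb):,.0f} over {MONTHS} months")
        break
```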

Lastly, the preferences of buyers and sellers are changing rapidly, and eBay wants to be able to react to these dynamic shifts. That requires complete control over its operations, including its data management, which is another compelling reason for eBay to launch its own cloud platform.

Overall, this is a great decision by the company, and hopefully it works well in the coming years.


The World’s First Commercial Quantum Computing Service is Here

Quantum computing is the technology of building computational systems based on quantum theory. Broadly speaking, it taps into the quantum-mechanical behavior of atoms to perform memory and processing tasks. The obvious advantage of the technology is speed, as it can perform certain calculations significantly faster than any device we know today.

To be more specific, quantum computing taps into the properties of subatomic physics, where small units of information called quantum bits, or qubits, can exist in multiple states simultaneously. Bits no longer have to be just 0 or 1, as in classical computing; a qubit can be in a superposition of both, and this flexibility opens a world of possibilities for computing.
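In standard textbook notation (nothing specific to IBM's service), a qubit's state is a superposition of the two classical basis states, with the squared amplitudes giving the measurement probabilities:

```latex
% A qubit is a unit vector in a two-dimensional complex Hilbert space.
\[
  \lvert \psi \rangle = \alpha \lvert 0 \rangle + \beta \lvert 1 \rangle,
  \qquad \alpha, \beta \in \mathbb{C},
  \qquad \lvert \alpha \rvert^{2} + \lvert \beta \rvert^{2} = 1
\]
% Measurement yields 0 with probability |alpha|^2 and 1 with probability |beta|^2.
```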

While the above is the theory, the practical side has been more challenging: getting enough qubits to work together to run any algorithm is difficult, to say the least. Two major approaches have helped address this. The first traps individual ions in a vacuum using magnetic or electrical fields, while the other encodes qubits in microscopic superconducting circuits.

IBM has been one of the pioneers of quantum computing, and it has relied heavily on the second approach to build its systems. Recently, it announced that the world's first commercial quantum computing service will be available later this year. Called IBM Q, the service will be accessible over the Internet for a fee.

So, what can we expect from this system? A supercomputer that will outperform every existing computer? Well, not really.

What we can expect is a system that will play a crucial role in developing other quantum machines that can perform complex tasks, especially those that have been impossible with our current computing technologies.

The system builds on the knowledge and research behind IBM's cloud-based quantum ecosystem, the Quantum Experience, which anyone can access for free. That service has been available to the public since May 2016 and recently received an upgraded interface. It currently allows thousands of researchers worldwide to build the quantum algorithms they want without having to build their own quantum computer.
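To get a feel for the kind of algorithm-level experimentation such services enable, here is a tiny classical simulation of a single qubit in plain NumPy. It is unrelated to IBM's actual tooling; it just applies a Hadamard gate to the |0⟩ state and reads off the measurement probabilities.

```python
import numpy as np

# Basis state |0> as a column vector.
ket0 = np.array([1.0, 0.0])

# Hadamard gate: puts a basis state into an equal superposition.
H = np.array([[1.0,  1.0],
              [1.0, -1.0]]) / np.sqrt(2.0)

psi = H @ ket0             # |psi> = (|0> + |1>) / sqrt(2)
probs = np.abs(psi) ** 2   # Born rule: measurement probabilities

print("amplitudes:", psi)      # [0.7071, 0.7071]
print("P(0), P(1):", probs)    # [0.5, 0.5]
```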

So far, IBM has not been forthcoming with details about IBM Q. It has not given a specific release date, nor said how powerful the system will be or how much access will cost. The only information we have is that IBM has lined up its first set of clients, though it hasn't named them.

Though details are scarce, the fact that the first commercial quantum computing service is on its way is big news by itself. It can pave the way for future developments in the industry and even propel it to great heights in the near future. In many ways, this is an exciting development simply because quantum computing can do things that were previously believed impossible.


IBM and Salesforce come together

IBM and Salesforce have come together in a significant partnership that could change the face of cloud and artificial intelligence (AI). The two tech giants have entered into an agreement to integrate their AI platforms, Watson and Einstein.

Besides their AI platforms, the companies also plan to align some of their software services and components. In addition, IBM has offered to deploy Salesforce Service Cloud internally as a goodwill gesture toward what both sides expect to be a lasting partnership. Specifically, the companies plan to tap into each other's machine learning capabilities to deliver more in-depth customer knowledge to their end clients. Furthermore, Watson's API will be introduced to Einstein, Salesforce's CRM-enhancing AI platform, allowing Einstein to make the most of IBM's work in cognitive computing.

The agreement also includes adding IBM's Weather Company assets to Lightning, Salesforce's app development platform. With this, the companies plan to make weather data a potent tool for predicting customer behavior and even for shaping customer preferences. Also, by the end of March, IBM's Application Integration Suite will be able to feed data from third-party sources into Salesforce CRM.

This partnership is obviously significant for both companies, as it can help IBM turn around its fortunes while helping Salesforce meet its ambitious growth plans. But beyond the companies themselves, it can have a big positive impact on the industry as a whole.

Imagine what happens when two of the world's best AI systems come together. They can power the next level of applications, provide the deepest possible insights, and do just about anything else the tech industry wants. Also, with the IoT and wearables industries booming, this move can alter the fortunes of companies in both areas. The existing clients of both companies, though, will be the greatest beneficiaries, as they can gain deeper insights into their customers' behavior, which in turn can help them devise better strategies to boost sales and revenue.

Both companies estimate that 2017 will be the year AI hits the world on a large scale, and they want to be in a position to drive the industry. IBM expects almost a billion people to be touched by Watson in one way or another, and with Salesforce joining hands, the possibilities are endless. The CEOs of IBM and Salesforce are positive about the partnership and believe it is the beginning of a long and exciting journey together.

If you're wondering why these two companies, it's because they've been associated for years. Though IBM is based in Armonk, New York, and Salesforce in San Francisco, California, the companies have long worked together. In March 2016, IBM acquired Bluewolf, a consultancy considered one of the oldest Salesforce implementation partners. Many interactions have happened between the two companies since, and this, of course, is the big step that can take their partnership to new heights.


GE and Siemens to tap into cloud for Manufacturing

The cloud is all-pervasive, and we see its presence in almost every sector today. Its many benefits make it a potent technology even for traditional sectors like manufacturing, and industrial titans like GE and Siemens are vying to make the most of the cloud to boost their manufacturing processes and products.

One of the core things they are working on is the Internet of Things (IoT). As a refresher, IoT is a technology that connects everyday devices like watches, alarm clocks and refrigerators into a complete digital system. Such a system creates a smooth flow of data between devices, which in turn can make life much easier for its users.

For example, your refrigerator could constantly monitor the level of milk and, if it falls below a particular threshold, order milk for you through your smartphone app, as in the sketch below. The milk arrives at your home without any effort on your part. That's the power of technology, and of IoT specifically. Since this is an evolving space, there are a lot of opportunities here, and this is exactly what GE and Siemens want to tap into.
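Here is a minimal sketch of that milk scenario; the sensor reading, threshold and ordering call are all hypothetical stand-ins.

```python
# Toy IoT reorder loop: a sensor reading crosses a threshold and
# triggers an order. Device names, thresholds, and the order stub
# are all hypothetical.

MILK_THRESHOLD_LITERS = 0.5

def read_milk_level_liters():
    """Stand-in for a real fridge sensor reading."""
    return 0.3

def place_order(item, quantity):
    """Stand-in for a call to a grocery-ordering API."""
    print(f"Ordering {quantity} x {item}")

level = read_milk_level_liters()
if level < MILK_THRESHOLD_LITERS:
    place_order("milk", 1)
```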

While some smaller companies are working on specific products, the aim of these big players is to reinvent the manufacturing process as a whole, so that individual firms can tap into each stage of the value chain, from design through production and maintenance. In other words, both companies want to create a cloud-based IoT system that forms the backbone of industrial automation and provides vast amounts of data about everything, from parts and inventories to the performance of different products.

To achieve such a smart backbone, GE and Siemens are looking to create built-in sensors and protocols that enable communication between pieces of industrial equipment such as pumps, drones and robots. The sensors are central: they monitor the systems and send detailed data back to the companies that own them, who can then learn about the health and performance of their machines. Alongside the sensors, platforms are what enable communication between different devices.
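As a sketch of what such sensor-to-platform communication often looks like in practice, here is a reading published over MQTT, a protocol commonly used in IoT; the broker address, topic and sensor values are placeholders, not anything GE or Siemens has published.

```python
import json
import time

import paho.mqtt.publish as publish  # pip install paho-mqtt

# Placeholder broker and topic; a real deployment would use the
# platform's own endpoints and authentication.
BROKER = "broker.example.com"
TOPIC = "factory/pump-17/telemetry"

# One periodic health reading for a single piece of equipment;
# the sensor values are made up for illustration.
reading = {
    "timestamp": time.time(),
    "vibration_mm_s": 2.4,
    "temperature_c": 61.0,
}

publish.single(TOPIC, json.dumps(reading), hostname=BROKER)
```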

According to research firm Markets&Markets, this market could be worth $150 billion within a short span of three years. The key to the success of this market depends to a large extent on the platform being used for the data flow. Microsoft has been an early leader here, having entered into agreements with both GE and Siemens to use its Azure cloud platform.

Besides Azure, Siemens has officially announced six partnerships and says hundreds more are in the pipeline, all of which suggests industrial automation will be a reality soon. GE, too, has a lead, as its platform is compatible with most other cloud platforms, and dozens of companies are already building applications on it.

In addition, both companies are furthering their automation efforts by acquiring digital companies that operate in this sphere. Overall, it's going to be a tight and interesting race that is sure to benefit everyone in the long run.


2nd Watch Gets a New Round of Funding

2nd Watch, a managed cloud services provider, has raised $19 million in Series D funding. The round was led by Delta-v Capital, with participation from Madrona Venture Group, Columbia Capital and Top Tier Capital Partners.

The Seattle-based company is likely to use the money to scale its cloud operations, add a managed cloud operations center in North Carolina, and hire people in sales, software, operations and client management. A good chunk of the money is expected to go toward serving its East Coast clients from a dedicated center in Raleigh.

2nd Watch is an AWS Premier Partner that provides managed cloud services within the AWS ecosystem. It was founded in 2010 with a clear plan: design, build and manage public clouds for big data analytics, digital marketing, cloud analytics and more. Specifically, the company helps clients modernize their IT infrastructure and services in the cloud.

In 2010, it was one of the first companies to join the AWS Partner Network. It was among the first audited and approved AWS managed service partners, and it even has the distinction of being the only cloud-native partner to earn SOC 2 compliance with a perfect score.

Over the years, it has added many prestigious clients, including Conde Nast, Coca-Cola, Lenovo, Motorola and Yamaha. With this rapid growth it has been adding more people to its rolls, 20 of them in February alone. It has about 160 employees so far and plans to grow to 200 by the second half of 2017 to meet the demands of its growing client base.

In terms of its cloud presence, 2nd Watch claims that it has 400 enterprise workloads under management and more than 200,000 instances in its managed cloud services.

The success of this company once again highlights the growing cloud market and the many opportunities it presents for small and medium companies to carve out a niche. There are hundreds of companies today offering specialized services, making the cloud a more attractive and feasible option for clients around the world.

That success has helped the company raise $56 million so far, and going forward it is only expected to gain more business and a larger client base. According to 2nd Watch CEO Doug Schneider, the firm doubled its revenue in 2016, and much of this can be attributed to the growing interest of companies across different sectors in moving to the cloud. Almost every company today understands the power of the cloud and is expected, sooner or later, to move to it.

Schneider opined that to meet the growing needs of its current and future clients, the company needs more investment. Considering the astronomical growth it has seen over the last year, funding should never be an issue, as long as the money is channeled in the right directions to further propel growth.

It’s sure going to be an exciting ride for 2nd Watch.


GitHub Offers New Business Option

GitHub has become accessible to clients who want to host their complete projects in the cloud. The company recently released a plan called the “Business” package, which gives customers the same features as those available on GitHub.com while hosting their code on GitHub's own servers.

According to GitHub CEO Chris Wanstrath, the package gives customers the choice to host their code online, away from their own servers. He noted that clients want to host their code in the cloud because of the many benefits that come with it, and GitHub is simply making that a reality.

This is a strategic move, given that GitHub is the world's most popular repository for programming code. An estimated 20 million-plus developers across one million teams use GitHub to store their code, there are an estimated 52 million projects on GitHub, and almost 1,500 teams join daily. In fact, 22 of the 50 largest companies in the US use the service as their code repository.

These numbers show GitHub's enormous reach in the programming community, and now it wants to further monetize that popularity. It already has an Enterprise offering under which companies can host their team's code in a private cloud. That service has been available since 2012 at $21 per user per month, and premium customers such as Walmart, Ford, IBM, John Deere, the government of the United Kingdom and NASA use it.

The Business package is also priced at $21 per user per month, and with it the company wants to establish itself as a software company, not just a startup.

While these packages target large corporations, the company has other plans for small and medium businesses. A package called “Team”, priced at $9 per user per month, is aimed at small businesses, while the most basic package, “Developer”, is priced at $7 per month and is meant for individual users.

With these pricing options, GitHub is confident it will bring more customers into its fold. The cloud is already becoming the preferred place for individuals and companies to store their code because it is easy to manage. The fact that all the code is stored in a remote location also means greater data redundancy and better protection against natural calamities and disruptions. Because the cloud helps ensure business continuity, more companies prefer to switch to it.

In addition, storing code in the cloud offers a high level of mobility and flexibility for employees. They are no longer restricted to office devices or networks and can work from practically anywhere, on any device. Since the current generation wants to strike the right balance between personal and work life, such an option can help attract the best talent.

These advantages are what GitHub wants to tap into. With a promise of 99.99% uptime and attractive pricing, GitHub is sure to expand its reach.
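For context on what that 99.99% figure actually promises, here is a quick back-of-the-envelope calculation of the downtime it allows per year.

```python
# How much downtime does a 99.99% uptime promise allow?

UPTIME = 0.9999
MINUTES_PER_YEAR = 365 * 24 * 60

allowed_downtime = (1 - UPTIME) * MINUTES_PER_YEAR
print(f"Allowed downtime: {allowed_downtime:.1f} minutes per year")
# -> roughly 52.6 minutes per year
```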


Amazon Web Services' Outage Impacts the Internet

Amazon's S3 went down on Tuesday morning, creating partial chaos across the Internet, though it didn't break the web completely. AWS experienced a four-hour outage that caused problems for the thousands of websites that depend on it for storage and cloud computing.

Many major companies, including Netflix, Pinterest, Spotify, Buzzfeed, Airbnb and Slack, use AWS to store and retrieve their data. The outage meant these companies could not access the data they wanted, though their financial and operational losses are not known at this time. While the outage did not shut down websites completely, it affected functions like file sharing, collaboration and image uploads.

Not all AWS clients were affected, but a substantial number experienced difficulties when a major chunk of S3 went offline on Tuesday. In fact, Amazon itself was affected: it was unable to update its own health dashboard for two hours, evidently because that data is stored on AWS.

AWS was not immediately available for comment, but a message on its website said the team was working to fix the errors and that, until then, S3 customer applications would continue to see high error rates. Reports on Twitter pointed to AWS's poor response to such an outage: the AWS dashboard did not display any real-time events or updates, even though the company acknowledged there was a problem.

Outages and their impact on business are among the worst fears of any organization, and this incident brought to light the data reliability issues that still plague the cloud industry despite all the advances in the field. Thankfully, such incidents are infrequent, but when they hit large service providers they become big news. The last comparable AWS outage happened in September 2015 and lasted five hours.

While it's not right to judge the industry as a whole and shun it completely, a more cautious approach is needed from everyone involved. Clients who depend on AWS should consider a multicloud strategy, or a product like Nimble Cloud Volumes that combines Azure and AWS in the same platform. That way, even if there is a problem with one service, data reliability and business operations are not affected.
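The failover idea behind such a multicloud strategy can be sketched in a few lines; the fetch functions below are stubs standing in for real AWS and Azure SDK calls, not Nimble's actual mechanism.

```python
# Toy multicloud read with failover: try the primary store first,
# fall back to the secondary if it errors. The fetch functions are
# hypothetical stubs, not real SDK calls.

def fetch_from_aws(key):
    """Stand-in for an S3 GetObject call."""
    raise ConnectionError("S3 unavailable")   # simulate the outage

def fetch_from_azure(key):
    """Stand-in for an Azure Blob download."""
    return b"replicated object bytes"

def resilient_read(key):
    for fetch in (fetch_from_aws, fetch_from_azure):
        try:
            return fetch(key)
        except Exception as err:
            print(f"{fetch.__name__} failed: {err}; trying next provider")
    raise RuntimeError(f"all providers failed for {key!r}")

data = resilient_read("reports/2017-03-01.json")
print(f"read {len(data)} bytes")
```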

For AWS and other cloud providers, this outage should be a wake-up call to increase the reliability and availability of their systems. This is not something you would expect from the world's largest cloud provider, with annual revenue of more than $10 billion. An estimated one million-plus organizations, from established giants like GE to startups like Snap, along with many government agencies, depend on AWS for data storage and availability.

One hopes such incidents don't recur, and that cloud providers do whatever is necessary to improve data availability.
