Squaring the hybrid cloud circle: Getting the best out of all scenarios

There is clearly a growing need and place for both public and private clouds. But users are increasingly looking for solutions that give them the best of all worlds by seamlessly interconnecting the two together into hybrid solutions. In addition, many organisations need to encompass legacy IT systems so that they operate as seamlessly as possible alongside the hybrid cloud environment. 

It’s a tall order. However, there are already clear signs that such transformative solutions are making the step from concept to reality.

For one, some of the major public cloud providers are stepping up to make the development and deployment of hybrid solutions more straightforward. The newly launched Microsoft Azure Stack, for example, is intended to allow organisations to run Azure IaaS and PaaS services directly within their own data centres, whether in-house or in their chosen colocation facility.

On paper, this allows organisations to enjoy the full range of public Azure services on their own hardware, while also moving private workloads seamlessly between their chosen data centre and the Azure public cloud. The major advantages here are continued ownership of core and mission-critical applications in a private cloud, combined with the continuous software updates and automated backups delivered with the Azure public cloud service.

Such initiatives are clearly essential for getting hybrid clouds well and truly off the ground. There are many organisations out there, especially more heavily regulated ones, demanding the retention of private cloud infrastructures and certain legacy systems. An organisation might be happy enough using an Internet-based public cloud development platform for testing new applications, but not once it goes into production.

In practice, whether in-house or off-premise, the data centres supporting these hybrids will need to be equipped with fit-for-purpose IT infrastructure, suitable cooling and sufficient power to scale and manage the increasing draw of high-density racks. They will also need highly skilled engineering personnel on hand: hybrid clouds are complex animals and cannot be built, tested and managed successfully without suitable facilities and training. High levels of physical and cyber security are also going to be more important than ever.

But, above all, as demand for hybrid cloud environments continues to grow, data centres must meet user expectations for application responsiveness and predictability. With considerable amounts of data moving back and forth between the public and private cloud environments, and possibly legacy systems, a hybrid approach brings both latency and the cost of connectivity sharply into focus.

Taking Azure Stack as a working example, it is not designed to work on its own but alongside the Azure public cloud as a peer system. The latency between an Azure Stack system and the Azure public cloud will therefore determine how fast and seamless the hybrid cloud is once deployed.

Trans-facility networking

However, few private data centres will be able to afford the dedicated network links necessary to assure consistent performance, on an ongoing basis, for workloads with variable resource needs. For ‘standard’ interlinks between existing Microsoft environments and the Azure public cloud, Microsoft offers ExpressRoute as a low-latency dedicated connection, but it is only available as a trunk connection to certain colocation, public cloud and connectivity operators. These can connect directly with ExpressRoute at core data centre speeds, largely eliminating latency issues and ensuring bandwidth is optimised.

For those private or colocation data centres not directly connected, the only alternative is to find an equivalently fast and predictable connection from their facility to an ExpressRoute partner end point. As such, organisations using ExpressRoute from their own private data centre will still have to deal with any latency and speed issues in the ‘last mile’ between their facility and their chosen ExpressRoute point of presence. This is the case even where connectivity providers offer ExpressRoute to a private or colocation facility, as they are layering their own connectivity between the edge of their network, with its ExpressRoute core connection, and the edge of the user network.

In addition, if an organisation is planning on using a colocation facility for hosting some or all the hybrid cloud environment but keeping legacy workloads operating in its own data centre, the colo must offer a range of diverse connectivity options. Multiple connections running in and out of the facility will assure maximum performance and resilience.

In summary, the major cloud providers and data centre providers are working hard to meet growing demand for ‘best of all worlds’ hybrid cloud solutions. However, delivering the predictable and seamlessly interconnected public, private and legacy environments that users really want will call for fit for purpose trans-facility networking. This is essential for squaring the circle and enabling the unified fully automated computing environments enterprise organisations are searching for.  

Assessing data centre strategies for cloud-scale software

The rapid advancement of cloud-scale software is driving the digital transformation impacting nearly every facet of our lives. The ways we work, communicate, navigate, travel, shop, manage our money, access healthcare and interact with things and places have changed dramatically in only five years.

There are many underlying technologies that enable this transformation – but none as profound as the rise of cloud-based software applications that we interact with throughout our day. 

IDC notes that cloud software accounts for nearly one-third of the $400bn+ software market and is driving virtually all of the organic growth in the industry. The category includes software-as-a-service offerings, ecommerce sites, and fintech and health tech apps, as well as emerging segments such as IoT-enabled applications and AI-enabled lifestyle applications like autonomous vehicles and smart homes.

Data centre strategies for cloud software are different from those that support traditional software applications. Whether it is hosted in public IaaS or in a private or managed private cloud, it is important to recognise several characteristics that make modern cloud software unique compared with software hosted locally or in an enterprise data centre.

Application multi-tenancy

Cloud software is by nature multi-tenant – supporting many users from different organisations on shared infrastructure but with logical divisions between different users’ private data.  Supporting consistent user experience means application infrastructure must adapt to surges in activity across the full user base.
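The logical division between tenants described above is commonly implemented by scoping every query to a tenant identifier. A minimal sketch follows; the table, column and tenant names are invented purely for illustration, not taken from any specific product:

```python
import sqlite3

# Rows from many tenants share one table; every query is scoped by tenant_id
# so one tenant can never see another tenant's rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE documents (tenant_id TEXT, title TEXT)")
conn.executemany(
    "INSERT INTO documents VALUES (?, ?)",
    [("acme", "Q3 forecast"), ("acme", "Hiring plan"), ("globex", "Roadmap")],
)

def documents_for(tenant_id):
    """Return only the rows belonging to one tenant."""
    rows = conn.execute(
        "SELECT title FROM documents WHERE tenant_id = ?", (tenant_id,)
    ).fetchall()
    return [title for (title,) in rows]

print(documents_for("acme"))    # acme sees only its own two documents
print(documents_for("globex"))  # globex sees only its one document
```

In a real multi-tenant application the same scoping discipline has to hold at every layer, which is why surges from one tenant’s user base can still affect the shared infrastructure underneath.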

Agile development and DevOps

One of the benefits of managing software in a central cloud-scale data centre is the ability to release new functionality more frequently. The rise of agile development and DevOps lets organisations roll out new software releases weekly or even more frequently. Infrastructure must be adaptable so that it does not become a bottleneck to introducing new functionality.

Scale

Modern software-driven businesses are built on the assumption they will scale. More users, more devices, more data. This requires data centre infrastructure strategies that can scale seamlessly and predictably with the businesses they support. It also means sensible cost strategies. One of the goals of any business is to realise economies of scale as they grow. One challenge with public IaaS offerings is that cloud storage and computing bills rise nearly linearly with the business.  Ideally data centre infrastructure will continue to deliver performance and responsiveness to applications as they scale while improving cost-efficiency.
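The cost dynamic described above can be sketched with a toy model. The figures below are entirely hypothetical, chosen only to show how a usage-based bill tracks user count almost linearly while owned capacity amortises a fixed cost over a growing base:

```python
def cloud_bill(users, cost_per_user=0.50):
    """Usage-based pricing: the bill scales roughly linearly with users."""
    return users * cost_per_user

def owned_infra_bill(users, fixed=20_000, marginal=0.10):
    """Owned capacity: a large fixed cost, plus a small marginal cost per user."""
    return fixed + users * marginal

# Per-user cost at three scales: the cloud figure stays flat, while the
# owned-infrastructure figure falls as the fixed cost is spread more thinly.
for users in (10_000, 100_000, 1_000_000):
    print(users,
          round(cloud_bill(users) / users, 3),
          round(owned_infra_bill(users) / users, 3))
```

Real pricing is far more nuanced (reserved instances, tiered discounts, staffing), but the shape of the curves is the point: only the second model delivers the economies of scale the text describes.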

Powerful analytics

Probably the most significant requirement for modern cloud software is the growing importance of sophisticated analytics-based functionality. Real-time analytics are at the core of personalised user experiences on ecommerce platforms. Analytics are behind the sophisticated business insights at the core of modern enterprise applications. Analytics drive the machine learning important for AI-enabled applications. And analytics power the new semantic interfaces and chatbots that are re-shaping how users interact with software.

Data centres need to be able to manage these analytics on the scale of traditional high performance computing applications. Scale-out, all-flash storage has been a key enabler to delivering analytics performance at scale. Emerging technologies based on low-latency NVMe networking and advanced solid-state memory designs promise to enable even greater levels of infrastructure performance than available today.

As product, DevOps, IT, and finance teams chart out data centre strategies to support cloud scale software applications, they must navigate an increasingly complex ecosystem of technologies, service providers, and architectural paradigms. The next generation of technology and service providers will need to help reconstruct the data centre technology stack to support the needs of modern, cloud software.

Government Cloud is growing by the day

Governments the world over are adopting cloud in a big way to leverage its many benefits. This foray into cloud, popularly known as the government cloud market, is expected to explode over the next few years.

A report titled “Government cloud market by solution” has been released by the research firm MarketsandMarkets. According to this report, the market is likely to grow from $15.4 billion in 2017 to a whopping $28.85 billion by 2022. That is nearly double in size within just five years, representing a compound annual growth rate (CAGR) of 13.4 percent.
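The report’s figures can be sanity-checked with the standard CAGR formula:

```python
# Growing $15.4bn at 13.4% a year for five years should land close to the
# forecast $28.85bn.
start, cagr, years = 15.4, 0.134, 5
end = start * (1 + cagr) ** years
print(round(end, 2))  # 28.88 -- matching the $28.85bn forecast

# Recovering the CAGR from the two endpoints gives the same rate back:
implied_cagr = (28.85 / 15.4) ** (1 / 5) - 1
print(round(implied_cagr * 100, 1))  # 13.4
```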

These numbers clearly show that government cloud is growing by the day. But why, you may wonder.

Much of this growth is driven by factors such as:

  • Lower IT costs
  • A wide choice of solutions, so governments can pick the one that best complies with their standards
  • Reduced dependence on people for operations and maintenance, which keeps staffing costs low
  • Hassle-free maintenance, with no need for a large IT department to monitor operations
  • Compliance with most standards
  • Instant access from anywhere, with the associated flexibility over employees’ working hours and locations
  • Easy deployment of cloud storage solutions, much of which can be completed within just a few hours, saving governments time and effort

Due to these benefits, government cloud is all set to boom over the next few years.

The report further states that out of all the different segments, the cloud storage segment is likely to see the highest growth.  This is not really a surprise considering the vast amounts of data that local, state and national governments have to collect and maintain to provide the right benefits to the right people.

The next segment likely to see major growth is integration and migration. Many government agencies have already started making the move to cloud solutions, so they will need help migrating their existing data to the new platform. There are also many legacy systems that need to be integrated with the cloud, so solutions that bridge this gap will be in great demand.

By region, North America is expected to contribute most to the government cloud market, while the Asia Pacific region is likely to see the highest growth during this period.

Overall, this is likely to be a period of high growth for government cloud, and companies of all sizes, from the mighty AWS and Microsoft to small startups, are looking to cash in on the boom.

The post Government Cloud is growing by the day appeared first on Cloud News Daily.

IaaS proving increasingly fundamental to progressive cloud strategies, says Oracle

More than two thirds of respondents in a survey from Oracle said they see infrastructure as a service (IaaS) as ‘fundamental’ to progressive cloud strategies – up 8% in the past quarter.

The study, the second in a series, polled 1,610 IT professionals across nine countries and three continents and found a combination of new services coming to market and the growing maturity of existing cloud deployments led to the increase.

An overwhelming 94% of respondents said they had adopted IaaS – again up 8% from the previous quarter – while two in three (66%) argue that businesses not investing in IaaS will find themselves struggling to keep up with those that are.

When it came to benefits, enhanced security was cited by more than half (52%) of those polled. Apart from that it was the usual routine: improved productivity (cited by 56% of respondents), greater system speed (50%) and reduced operating costs (48%) were all seen as key.

If one problem persists, it is around the dreaded skills gap. More than a quarter (28%) of companies say IT skills shortages have been one of their biggest issues in rolling out IaaS – up from 21% in the previous quarter.

Yet the takeaway is simple: adopt IaaS or you will soon fall significantly behind. “Investments in cloud infrastructure are clearly paying off for businesses and we are seeing that very strongly in the UK,” said Jason Rees, senior director of Oracle Cloud Foundation Technologies in a statement.

“The UK is a diverse and inspiring business market, comprising businesses from innovative startups to large traditional multinationals. What they have in common is cloud computing is helping them get where they need to go faster, whether they are chasing scale, or undergoing a reinvention for the digital economy,” Rees added.

“Their ambitions are being powered by the agility cloud infrastructure offers.”

As regular readers of this publication will be aware, Oracle has made significant recent announcements as it pushes its cloudy ethos. Earlier this month chairman and CTO Larry Ellison officially unveiled the company’s autonomous database, putting a few barbs Amazon’s way in the process. “This is the most important thing we have done in a long, long time,” said Ellison. “For years and years artificial intelligence did not live up to its promise, but there is a new type of AI… the first branch of artificial intelligence that really, really works.”

You can find out more about the Oracle research here.

AWS and Microsoft team up for deep learning project

The keenest rivalry in cloud computing may have just taken a slightly different turn: Amazon Web Services (AWS) and Microsoft are collaborating on an open artificial intelligence (AI) ecosystem.

The project, called Gluon, is a programming API which “allows developers of all skill levels to prototype, build, train and deploy sophisticated machine learning models for the cloud, devices at the edge and mobile apps,” in the words of a joint press release.

“The potential of machine learning can only be realised if it is accessible to all developers. Today’s reality is that building and training machine learning models requires a great deal of heavy lifting and specialised expertise,” said Swami Sivasubramanian, vice president of Amazon’s AI arm in a statement.

“We created the Gluon interface so building neural networks and training models can be as easy as building an app,” Sivasubramanian added. “We look forward to our collaboration with Microsoft on continuing to evolve the Gluon interface for developers interested in making machine learning easier to use.”

“As a society we face enormous challenges which AI has the potential to solve,” wrote Eric Boyd, Microsoft CVP AI data and infrastructure in a blog post. “However, developing with AI, especially deep learning models, isn’t easy – it can be a fairly daunting and specialised practice for most data professionals.

“We believe bringing AI advances to all developers, on any platform, using any language, with an open AI ecosystem, will help ensure AI is more accessible and valuable to all,” Boyd added.

Naturally, both companies have significant stakes in AI. Amazon has made several recent efforts in attracting developers, most recently around Alexa skills and migrating chatbots.

In terms of eyebrow-raising partnerships, this is certainly up there. It is perhaps not quite as bizarre as when Salesforce and Oracle teamed up in 2013 – something of a cessation in hostilities, with the former continually gauged as a barometer for the latter’s recent financial results – but in this instance, Microsoft and AWS realise that working together is better for the common good – their customers.

How service providers can secure the future of virtualised networks

Telecoms networks have undergone a major transformation recently, driven by the move from 3G to 4G Long Term Evolution (LTE) networks and the explosion of IoT-connected devices onto the market.

Earlier this year, Gartner forecast that 8.4 billion connected “things” would be in use worldwide in 2017. Mobile devices have also undergone their own transformation and are now every bit as powerful and ubiquitous as regular computers. The volume and variety of data they store has increased dramatically, putting more pressure on networks and service providers to meet demand without disruption.

Recent advances in network software, namely software defined networking (SDN) and network function virtualisation (NFV), have allowed service providers to transform their networks to meet the changing requirements of the customers they serve. The combination of the two offers better control of the network, flexibility when deploying services, scalability, and full control over where in the network each virtual instance runs.

Another benefit which must be observed closely is security. This continues to be a major challenge for operators, as they look to provision and manage their network infrastructure while, at the same time, their customers must be able to run their own firewalls and virtual space on top of it.

Protecting the reputation of service providers

The threat landscape facing mobile networks has broadened from the SMS-based attacks of early mobile phone days to a wider attack surface which threatens the device, application and network. For example, the success of Pokémon Go in 2016 gave rise to a number of rogue apps targeting users with promises of cheats, tips and other functionality.

The Mirai botnet, meanwhile, saw hackers infect vulnerable IoT devices and use them as a weapon to carry out DDoS attacks on telecoms organisations. Earlier this year, the notorious WannaCry ransomware attack also wreaked havoc across the world, with organisations such as Telefonica and the NHS affected. This event gave rise to discussions about the role that service providers have to play in reducing exposure to successful exploits.

With the increasing variety and sophistication of threat vectors, including social engineering, malware, DDoS attacks and more, it is critical for modern LTE network operators to protect themselves and their clients from potential attacks. Hackers can turn operators’ networks into weapons, and the blame falls on the operators when they fail to prevent attacks. Their reputation is at stake, and their fate lies in their response to the threats.

Fortunately, the virtualisation of carrier networks has changed the way the industry views security. The Mirai attack was a wake-up call to service providers and an example of how much damage hackers can do with vulnerable devices. Service providers have responded by adding layers of security defences to protect against these attacks, whether originating from outside or inside the organisation.

Simplified security with virtualisation

The virtualisation of networks can make security easier and more cost-effective for service providers across infrastructure. For example, in the past, service providers would need to assess where the largest amount of traffic was coming from in the network to deploy security in response. However, virtualisation improves the ability to detect threats anywhere in the cloud and deploy security more efficiently. Once service providers have protected their physical infrastructure, orchestration tools make it easier to spin up a virtual firewall.

Still, the argument of simplified security should be approached with caution. The overall attack risk is potentially larger under NFV, with multiple control and data planes now present. Service providers are currently dealing with a physical layer where their SDN components run, which is then abstracted for use by virtual instances. The physical infrastructure and the virtual instances each have their own security requirements, which must be met.

Maintaining security and resilience

With the increased complexity that virtualisation brings, service providers need to look at their resources and judge which are worth protecting. Once that decision has been made, they need to choose the best mitigation response for each type of attack.

The core technologies to deploy would include DDoS defence and a web application firewall (WAF), alongside a logging-as-a-service (LaaS) architectural model to understand which sources are generating which types of attack. This enables the provider to determine the defences required to mitigate an attack without affecting the services that are not under attack.
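The LaaS idea above – centralise security events, then work out which sources generate which attack types so the mitigation can be targeted – can be sketched in a few lines. The event records and addresses below are invented for illustration:

```python
from collections import Counter

# Security events collected centrally from across the network.
events = [
    {"src": "203.0.113.7",  "kind": "http-flood"},
    {"src": "203.0.113.7",  "kind": "http-flood"},
    {"src": "198.51.100.2", "kind": "sql-injection"},
    {"src": "203.0.113.7",  "kind": "http-flood"},
    {"src": "198.51.100.9", "kind": "sql-injection"},
]

# Count (source, attack-type) pairs to see where the pressure is coming from.
by_source = Counter((e["src"], e["kind"]) for e in events)
for (src, kind), count in by_source.most_common():
    print(src, kind, count)

# The top offender here is one source sending an HTTP flood, suggesting a
# rate-limiting DDoS defence rather than a new WAF rule as the response,
# leaving services not under attack untouched.
print(by_source.most_common(1)[0])
```

A production LaaS pipeline would of course stream millions of events through a log platform rather than a list in memory, but the aggregation step is the same in spirit.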

Is hyperconvergence really key to your data centre cloud strategy?

Vendors often like to create a new name for an existing piece of technology that is essentially made up of the same components and fulfils the same functions. This is because of factors such as the competitive pressure to keep customers interested: application service provision is more commonly known today as the cloud, while converged infrastructure has led to hyperconverged infrastructure.

Sometimes there are real technological differences between products, but this isn’t always the case: once a technology has peaked, market interest can quickly fall away. Vendors’ claims – and even media reports – should therefore be treated with a pinch of salt and some careful scrutiny.

For example, DABCC magazine (August 2017) highlighted: “Cloud is becoming a key plank in virtually every organisation’s technology strategy. The potential benefits are now widely understood, among them the ability to save money and reduce IT management overheads, meaning more resources can be ploughed into other parts of your business.”

The article, ‘Why Data Centre Hyperconvergence is Key to Your Cloud Strategy’, points out that moving to the cloud “…won’t necessarily deliver these benefits if done in isolation: organisations also need to look at their data centre operations and streamline how these are run. There’s a need to rationalise your data centre as you move to cloud.”

Cloud: Not for everyone

Let’s face it, the cloud isn’t for everyone, but it nevertheless has its merits. Before you invest in new technology or move to it, you should examine whether your existing infrastructure is sufficient for the job you need it to do. Ask yourself: in the hyperconvergence story, what’s really important?

In response, David Trossell, CEO and CTO of data acceleration vendor Bridgeworks, notes: “We’ve been shouldering traditional system architecture for more than 50 years now”. He explains that there have only been a few significant changes along the way. Apart from the likes of IBM, which has traditionally provided a one-stop shop, companies still purchase different parts of the system from different vendors.

This approach means customers can source parts for the most competitive price or which offer the best solution, from different vendors. However, the downside is the need to repeat the entire process of verifying compatibility, performance and so on.

“The other often unseen consequence is the time taken to learn the new skill sets needed to manage and administer the varying products”, Trossell warns. Yet he points out that there is increasing pressure on organisations to spend less while achieving more for each pound or dollar. This means there is an expectation to deliver more performance and functionality from shrinking IT budgets.

He adds: “With [hyperconvergence’s] Lego-style building blocks, where you add the modules you require knowing everything is interoperable and auto-configuring, increasing resources in an area becomes a simple task. Another key benefit is the single point of administration, which dramatically reduces the administrative workload and [requires only] one product skill set.

“So, what about the cloud? Does this not simplify the equation even further?” he asks.  With the cloud, he says there’s no need to “…invest in capital equipment anymore; you simply add or remove resources as you require them, and so we are constantly told this is the perfect solution.”  To determine if it is the perfect solution, there is a need to examine other aspects of a cloud-only strategy. The cloud may be one of many approaches that’s needed to run your system.

Part of the story

Anjan Srinivas, senior director of product management at Nutanix – a company that claims to go beyond hyperconverged infrastructure – agrees that hyperconvergence is only part of the story.  He explains the history that led to this technological creation. “The origins of the name were due to the servers’ form factor used for such appliances in the early days,” he says. “The story actually hinges upon the maturity of software to take upon itself the intelligence to perform the functions of the whole infrastructure stack, all the way from storage, compute, networking and virtualisation to operations management, in a fault tolerant fashion.”

He adds: “So, it is fundamentally about intelligent software enabling data centre infrastructure to be invisible. This allows companies to operate their environments with the same efficiency and simplicity of a cloud provider. Hyperconvergence then becomes strategic, as it can stitch together the public cloud and on-premise software-defined cloud, making the customer agile and well-positioned to select multiple consumption models.”

Cost-benefit analysis

Trossell nevertheless believes that it’s important to consider the short-term and long-term costs of moving to the cloud: “You have to consider whether this is going to be a long-term or short-term process. This is about whether it is cheaper to rent or buy, and about which option is most beneficial.”

The problem is that although the cloud is often touted as cheaper than a traditional in-house infrastructure, its utility rental model could make it far more expensive in the long term – more than the capital expenditure of owning and running your own systems.

“Sometimes, for example, it is cheaper to buy a car than to rent one”, he explains.  The same principle applies to the cloud model. For this reason, it isn’t always the perfect solution.  “Done correctly, hyper-convergence enables the data centre to build an IT infrastructure capable of matching public cloud services in terms of elements like on-demand scalability and ease of provisioning and management”, adds Srinivas.

“Compared to public cloud services, it can also provide a much more secure platform for business-critical applications, as well as address the issues of data sovereignty and compliance. A hyper-converged platform can also work out more economical than the cloud, especially for predictable workloads running over a period.”

Silver linings

“Not every cloud has a silver lining”, says Trossell. He argues that believing the hype about the cloud isn’t necessarily the way to go. “You have to consider a number of factors such as hybrid cloud, keeping your databases locally, the effect of latency and how you control and administer the systems.”

He believes there is much uncertainty ahead, since the cloud computing industry expects the market to consolidate over the coming years, leaving very few cloud players. If this happens, cloud prices will rise and the competitive pressure to keep them down will be lost. There are also issues to address, such as remote latency and the interaction of databases with other applications.

Impact of latency

Trossell explains: “If your application is in the cloud and you are accessing it constantly, then you must take into account the effect of latency on users’ productivity. If most of your users are within HQ, this will affect them. With geographically dispersed users you don’t have to take this into account.

“If you have a database in the cloud and you are accessing it a lot, the latency will add up. It is sometimes better to hold your databases locally, while putting other applications into the cloud.

“Databases tend to access other databases, and so you have to look at the whole picture to take it all into account – including your network bandwidth to the cloud.”
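The way latency “adds up” is easy to see with back-of-envelope arithmetic. If a page render issues its database queries sequentially, the per-query round-trip time multiplies out; the numbers below are hypothetical:

```python
def page_latency_ms(queries_per_page, round_trip_ms):
    """Total network wait for one page, assuming queries run sequentially."""
    return queries_per_page * round_trip_ms

local = page_latency_ms(40, 0.5)   # database on the LAN: ~0.5 ms round trip
cloud = page_latency_ms(40, 25.0)  # database in a remote cloud region: ~25 ms

print(local)  # 20.0 ms of waiting -- imperceptible to the user
print(cloud)  # 1000.0 ms -- a full second added to every page load
```

The same 25 ms that is harmless for a single request becomes a second of dead time once forty chained queries depend on it, which is exactly why chatty databases are often better kept local.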

Your existing infrastructure, within your data centre and outside of it, therefore must be part of this ‘bigger picture’. So, with regards to whether hyperconvergence is the way to go, Trossell advises you to analyse whether you’re still able to gain a return on investment (ROI) from your existing infrastructure.

“Think about whether it has a role in your cloud strategy”, he advises, before adding: “With a hybrid cloud strategy you may be able to downsize your data centre, saving on maintenance charges too. If you are going to go hyperconverged, then some training will be required. If you are going to use your existing infrastructure, then you will already have some skill sets on-site.”

He adds: “If the licence and maintenance costs of the existing infrastructure outweigh the costs of hyperconvergence, then there is a good business case for installing a hyperconverged infrastructure. This will allow everything to be in one place – a single point of administration.”

Data protection

There is still a need to consider data protection, which often gets lost in the balancing act. The cloud can nevertheless be used for backup as a service (BUaaS) and disaster recovery as a service (DRaaS), as part of a hybrid solution. Still, he stresses that you shouldn’t depend solely on the cloud and recommends storing data in multiple places.

This can be achieved with a solution, he claims, such as PORTrock IT: “If you decide to change over to the cloud, you need to be able to move your data around efficiently and at speed, as well as restore data if required. You need to keep it running to protect your business operations.”

Not just storage

Trossell and Srinivas agree that storage shouldn’t be your only consideration. “Storage is an important aspect, but that alone does not allow enterprises to become agile and provide their businesses with the competitive edge they expect from their IT”, says Srinivas. He argues that the advantage hyper-convergence offers is “the ability to replace complex and expensive SAN technology with efficient and highly available distributed storage, [which] is surely critical.

“What is critical is how storage becomes invisible and the data centre OS – such as that built by Nutanix – can not only intelligently provide the right storage for the right application, but also make the overall stack and its operation simple”, believes Srinivas.

“Consider backups, computing, networks – everything”, says Trossell before adding: “Many people say it’s about Amazon-type stuff, but it’s about simplifying the infrastructure. We’re now moving to IT-as-a-service, and so is hyper-convergence the way to go for that type of service?”

Technology will no doubt evolve and by then, hyper-convergence may have transformed into something else. This means that it remains an open question as to whether hyper-convergence is key to your data centre cloud strategy.

It may be now, but in the future it might not be. Nutanix is therefore wise to ensure that the Nutanix Enterprise Cloud “…goes beyond hyperconverged infrastructure.” It would also be worth considering whether other options might serve your data centre needs better. Some healthy scepticism can help us find the right answers and solutions.

Top tips for assessing whether hyperconverged is for you

  • Work out if there is any value in your existing infrastructure, and dispose of whatever no longer has any value – which may not be all of it
  • Calculate and be aware of the effects of latency on your users, including functionality and performance
  • Run the costings of each solution out for three to five years, and examine the TCO for those periods to determine which solution is right for your business
  • Weigh the savings on maintenance and licensing against the cost of moving to a hyper-converged infrastructure
  • Consider a hybrid solution – and don’t lose sight of your data protection during this whole process
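The TCO comparison in the tips above can be sketched as a simple calculation. All of the figures below are hypothetical, purely for illustration – substitute your own licence, maintenance and migration costs:

```python
def tco(upfront: float, annual: float, years: int) -> float:
    """Total cost of ownership: one-off outlay plus recurring annual costs."""
    return upfront + annual * years

# Existing infrastructure: no new outlay, but ongoing licence + maintenance
# (assumed figure).
existing_annual = 120_000

# Hyperconverged: a migration/hardware outlay up front, lower recurring costs
# (both figures assumed).
hci_upfront = 250_000
hci_annual = 60_000

for years in (3, 5):
    keep = tco(0, existing_annual, years)
    move = tco(hci_upfront, hci_annual, years)
    cheaper = "hyperconverged" if move < keep else "existing"
    print(f"{years}-year TCO: existing {keep:,.0f} vs HCI {move:,.0f} -> {cheaper}")
```

With these assumed numbers the existing estate wins over three years but hyperconvergence wins over five – which is exactly why the tip says to run the costings out for both periods before deciding.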

Happy Friday the 13th from Parallels

It’s Friday the 13th and it’s October. Halloween is coming and things are getting spooky. We have a Friday the 13th Easter egg for you because we just love number 13 at Parallels. Hint: the about screen. Some of you have already noticed and commented on it. If you’re a @ParallelsMac 13 user, there’s an amusing […]


At @CloudExpo Silicon Valley, @TidalScale to Demonstrate How to Get Much More from the Cloud | #Cloud #Storage #BigData

SYS-CON Events announced today that TidalScale will exhibit at SYS-CON’s 21st International Cloud Expo®, which will take place on Oct 31 – Nov 2, 2017, at the Santa Clara Convention Center in Santa Clara, CA.
TidalScale is the leading provider of Software-Defined Servers that bring flexibility to modern data centers by right-sizing servers on the fly to fit any data set or workload. TidalScale’s award-winning inverse hypervisor technology combines multiple commodity servers (including their associated CPUs, memory, storage and network) into one or more large servers capable of handling the biggest Big Data problems and most unpredictable workloads.


Tech News Recap for the Week of 10/09/17

If you had a busy week and need to catch up, here’s a tech news recap of articles you may have missed for the week of 10/09/2017!

  • Achieving hyper-flexibility by migrating your network to AWS & Azure
  • Networking trends of 2017
  • Building a modern help desk
  • How Azure Stack helps deliver intelligent cloud and edge computing

…and more top news this week you may have missed! Remember, to stay up-to-date on the latest tech news throughout the week, follow @GreenPagesIT on Twitter.



IT Operations

[Interested in learning more about SD-WAN? Download What to Look For When Considering an SD-WAN Solution.]

Microsoft

  • How Azure Stack helps Microsoft deliver the promise of intelligent cloud and edge
  • What’s new in Microsoft Visual Studio Code 1.17
  • Microsoft is banking on social platforms for VR adoption
  • Microsoft: We’ll have two-thirds of Office users in the cloud by fiscal 2019
  • Microsoft just ended support for Office 2007 and Outlook 2007
  • What is Windows 10 Fall Creators update? Everything you need to know about Microsoft’s big upgrade

AWS

  • GE solidifies commitment to AWS for IT apps

Dell

  • Dell launches $1B IoT division to mold a world of smarter cities

Liquidware 

  • 10 ways FlexApp has raised the bar for layering

VMware

  • VMware Fusion 10 updates Mac virtualization app, adds High Sierra support and Pro features

Citrix

  • How using Citrix XenApp in the cloud helped Nudie Jeans extend access to applications across continents


By Jake Cryan, Digital Marketing Specialist

While you’re here, check out this white paper on how to rethink your IT security, especially when it comes to financial services.