Addteq to Exhibit at @DevOpsSummit | @Addteq @Atlassian #APM #DevOps

SYS-CON Events announced today that Addteq will exhibit at SYS-CON’s @DevOpsSummit at Cloud Expo New York, which will take place June 6-8, 2017, at the Javits Center in New York City, NY. Addteq is one of the top 10 Platinum Atlassian Experts, specializing in DevOps, custom and continuous integration, automation, plugin development, and consulting for midsize and global firms. Addteq firmly believes that automation is essential for successful software releases, and centers its products and services on this approach to delivering complete software release management solutions. By combining Addteq’s services with its extensive list of partners, the company aims to provide the best solutions to fit your business. Addteq is headquartered in Princeton, NJ, with additional offices in Los Angeles, CA, and Pune, India.

Tech News Recap for the Week of 02/20/2017

Were you busy this week? Here’s a tech news recap of articles you may have missed for the week of 02/20/2017!

Over 90% of healthcare companies experienced a data breach within the past 2 years. Microsoft adds Kubernetes on Azure. Why ransomware is a big problem for small & medium businesses. Fuel faster application delivery with Cisco Hybrid Cloud. CIOs and the Life Sciences Industry: defining the IT roadmap. New Cisco VNF hardware, perimeter virtualization. Developing Cloud Native Applications with Cisco, and more top news this week you may have missed!

Remember, to stay up-to-date on the latest tech news throughout the week, follow @GreenPagesIT on Twitter.

Tech News Recap

Join us for our upcoming webinar:

“Harnessing Lightning: DevOps + ITOM for Secure & Compliant Hybrid Cloud Ops.”

Click here to register

By Jake Cryan, Digital Marketing Specialist

[session] @Nutanix Storage for #Containers | @DevOpsSummit #SDN #DevOps

In recent years, containers have taken the world by storm. Companies of all sizes and industries have realized the massive benefits of containers, such as unprecedented mobility, higher hardware utilization, and increased flexibility and agility; however, many containers today are non-persistent. Containers without persistence miss out on many benefits, and in many cases simply pass the responsibility of persistence onto other infrastructure, adding additional complexity.

[session] SAP in the Cloud, Is It the Best for You and How? | @CloudExpo #SAP #Cloud #BigData

In his session at 20th Cloud Expo, Chris Carter, CEO of Approyo, will discuss the basic setup of an SAP solution in the cloud and what it means for the viability of your company.
Chris Carter is CEO of Approyo. He works with businesses around the globe to assist them in their journey to Big Data in the forms of Hadoop (Cloudera and Hortonworks) and SAP HANA. Approyo supports firms looking for the knowledge to grow through their current business processes, where even a 1% increase can lead to millions of dollars in business flow.

Storj Labs Raises $3 Million

This is the perfect time to be a tech startup, as many angel investors and venture capitalists are looking for innovative products and ideas that could propel technology to new heights. Over the last year, many startups have successfully raised seed capital and funding. The latest on this list is Storj Labs, which raised $3 million in funding from investors such as Google Ventures, Qualcomm Ventures and Techstars.

Storj (pronounced “storage”) is a distributed cloud-service provider that has created a decentralized peer-to-peer solution. The service works by connecting users who are willing to rent out their spare hard drive space and bandwidth with customers who need storage. All of these users are connected through a peer network.

In the past, one of the biggest challenges of having a peer-to-peer cloud sharing technology was security. How can users know that their private data will not be accessed by other users in the same network? In fact, this concern was one of the reasons why this idea never took off in the first place.

Storj claims that it has found a secure solution to this problem. Its system lets users offer their unused space as secure, decentralized storage using blockchain features such as a transaction ledger, public/private key encryption and cryptographic hash functions. In this sense, Storj has applied blockchain technology to ensure that files are safe and not accessed by unauthorized users.

So, what’s the advantage of choosing Storj over the leading providers like AWS and Microsoft?

Cost. AWS and Microsoft need large datacenters to store the data, not to mention the high cost of physical space and electricity needed to power it. This cost is passed on to developers and users. In addition, there is always a possibility for physical servers and networks to have problems, and this can lead to data failure or loss.

In stark contrast, the services offered by Storj are cost-effective because the money goes to the users who are renting out their space. Since Storj doesn’t need physical space or electricity of its own, the cost per unit of consumption is low. Beyond cost, users also remain in control of their devices and data because of the decentralized model.

There are no central servers that can be compromised, so data is safe. In addition, the Storj system uses client-side encryption, which means only the end users hold the keys needed to decrypt their files. Security is thus Storj’s biggest selling point.
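Client-side encryption of this kind can be sketched in a few lines. The snippet below is purely illustrative and is not Storj’s actual protocol: the cipher is a toy SHA-256 counter-mode keystream, not production cryptography. The idea it demonstrates is that the user encrypts locally, keeps the key, and publishes only a content hash that peers can use to verify the stored data without reading it.

```python
import hashlib
import os

def keystream(key: bytes, nonce: bytes, length: int) -> bytes:
    """Illustrative SHA-256 counter-mode keystream (NOT production crypto)."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def encrypt(key: bytes, plaintext: bytes):
    """Encrypt on the client; return (nonce, ciphertext, content hash).
    The hash can be recorded in a ledger so any peer can verify the
    stored shard without being able to read it."""
    nonce = os.urandom(16)
    ct = bytes(p ^ k for p, k in zip(plaintext, keystream(key, nonce, len(plaintext))))
    return nonce, ct, hashlib.sha256(ct).hexdigest()

def decrypt(key: bytes, nonce: bytes, ciphertext: bytes) -> bytes:
    # XOR with the same keystream undoes the encryption.
    return bytes(c ^ k for c, k in zip(ciphertext, keystream(key, nonce, len(ciphertext))))

key = os.urandom(32)  # stays with the end user only
nonce, ct, digest = encrypt(key, b"private medical record")
assert hashlib.sha256(ct).hexdigest() == digest      # peers verify integrity
assert decrypt(key, nonce, ct) == b"private medical record"
```

Because only the ciphertext and its hash ever leave the client, a peer holding the shard can prove it stores the right bytes but can never recover the plaintext.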

Such a unique model has seen a good response from both users and customers. Currently, Storj has about 7,500 users who rent out their hard drive space, and about 15,000 API users worldwide.

Storj has also entered into an agreement with Heroku, a cloud-based platform-as-a-service (PaaS). This partnership helps developers to build and run their applications completely in the cloud, as Storj offers them a distributed object storage solution complete with advanced features such as encryption.

Little wonder, then, that Storj Labs has raised $3 million in funding. It could be the first of many rounds to come.

The post Storj Labs Raises $3 Million appeared first on Cloud News Daily.

Enterprises at risk of ‘significant overspend’ on cloud services, research warns

1&1 continues to be the best value for money cloud provider with Azure and Amazon Web Services (AWS) trailing, according to the latest report from cloud performance benchmarking firm Cloud Spectator.

The annual report, of which regular readers of this publication will already be aware, this time covers the US infrastructure as a service (IaaS) space, and aims to show that biggest is not always best.

Ten cloud service providers were analysed, including the four largest players in the market – AWS, Azure, Google, and IBM SoftLayer – all of which had to provide self sign-up, persistent block storage, and hourly billing intervals. The vendors were ranked on the median performance of vCPU-memory and storage, then scored out of 100 on combined price and performance.

With 1&1 as the leader and given 100 out of 100 as the benchmark, the major players struggled to get even half of that number. Google (48) was the best performer, while Azure (27), AWS (24) and SoftLayer trailed off significantly.
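The relative scoring works like an index: the best price-performance value is pinned at 100 and everyone else is scaled against it. A minimal sketch, using made-up provider names and raw values (chosen here only so the resulting scores happen to mirror the article’s 100/48/27/24 spread; this is not Cloud Spectator’s data or methodology):

```python
# Hypothetical price-performance values (performance units per dollar).
raw = {"ProviderA": 8.4, "ProviderB": 4.0, "ProviderC": 2.3, "ProviderD": 2.0}

best = max(raw.values())
# Pin the leader to 100 and score everyone else relative to it.
scores = {name: round(value / best * 100) for name, value in raw.items()}
# scores == {"ProviderA": 100, "ProviderB": 48, "ProviderC": 27, "ProviderD": 24}
```

A score of 48 therefore means a provider delivers less than half the performance per dollar of the index leader.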

When it came to virtual machine performance, the bigger players were seen as most reliable – Amazon, Azure and Google scored lowest on performance variability – yet on overall median performance, 1&1, Rackspace and OVH were in front. For block storage, Rackspace was on its own in terms of performance levels, with most providers showing little variability (Amazon’s disk variability was artificially high due to an anomaly).

“The 2017 [report] highlights considerable differences in price, performance and stability across the leading IaaS providers,” said Kenny Li, Cloud Spectator CEO. “More than ever, the enterprise consumer is at risk of significantly overspending when it comes to selecting the right cloud products and vendors.”

Of course, the argument will be that the convenience and reliability of choosing an established player – the top four vendors in cloud infrastructure own more than half of the market, according to Synergy Research – helps it pay for itself. Yet speaking to this publication last year, Li emphasised the importance of due diligence. “When it comes to price performance, we see many smaller players find an advantage by offering high-performance infrastructure at a very competitive price,” said Li.

“The volume [from the hypervendors] also comes with additional performance considerations, such as throttling to provide a standard user experience across the entire infrastructure, which may result in lower performance on cloud services.”

The 10 vendors assessed were 1&1, Amazon, Azure, CenturyLink, DigitalOcean, Dimension Data, Google, OVH, Rackspace, and SoftLayer. You can find out more about the report here (registration required).

How the IoT and mobility has made cloud more than a ‘nice to have’

In its Worldwide Cloud IT Infrastructure Hardware Spending Forecast, 2016–2020, IDC forecast that spending on cloud IT infrastructure would grow at a compound annual growth rate (CAGR) of 15.6 percent to reach $54.6 billion by 2019. As the move to the cloud becomes inexorable for organisations, they face the important task of properly managing this significant architectural change.

Perhaps the most critical facet of digital transformation management is security. Storing data on someone else’s server has proven to be a hindrance for many organisations and industries. Storing vacation photos on Facebook is one thing; storing personal, medical or financial records or transactions on public infrastructure is a completely different animal. As we know, any organisation that manages, stores or transmits this type of data must comply with government and industry compliance directives that specify how to handle the security of customers’ data.

The impact of SaaS

The IoT and mobility have made the cloud much more than a nice-to-have tool. Today it is a critical factor for any business trying to compete in digital transformation. To understand the increasing need for online/cloud tools, it’s important to understand the underlying software that makes digital transformation what it has become today.

Software delivery began with this model: an organisation would buy a number of enterprise application seat licenses and install a client on each machine in the company. These licenses often came bundled with hefty support contracts, increasing the cost of the product.

The internet changed all that. With its widespread availability, software as a service (SaaS) replaced that outdated model. SaaS made it possible for software developers to move their products to an online environment where businesses and consumers could download software from the cloud. As this new delivery method grew in popularity, it provided the means to effectively move the software away from the customer site to the developer site where it could be managed, updated, distributed and controlled by the application creators as needed.

SaaS enabled developers to release software more quickly and efficiently, add features and updates as needed and deliver cyber security patches on the fly. The cloud provided the mechanism by which an entire industry could change its distribution model. It also paved the road for a major change in how software organisations developed their product, whereby they could move away from the traditional, time-consuming and painful waterfall development cycle to the agile approach.

As a result of SaaS’s success, platform as a service (PaaS) and infrastructure as a service (IaaS) have taken off as well. Core systems that underpin an organisation’s combined SaaS footprint and online presence can today be outsourced to firms that specialise in a given technology or protocol. Efficiencies are gained by spreading the cost out across many different customers.

As for security, the service, platform or technology providers mitigate risk via powerful SLAs. These SLA agreements serve to hold the vendor true to their word that their product or service will always be available and operate within a certain window of expected performance. The cost benefits are significant, and they are increased when more physical or logical capacity is ordered, as the costs are incremental based on usage instead of the capital expenditure associated with traditional scaling tactics.
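The usage-based economics are easy to see with a back-of-the-envelope comparison. The figures below are entirely hypothetical and only illustrate the incremental-cost point:

```python
# Hypothetical numbers: buying hardware up front vs renting equivalent
# capacity by the hour and paying only for what is actually used.
capex_server = 12_000          # one-off cost to buy a server outright
opex_per_hour = 0.50           # hourly rate for equivalent cloud capacity
hours_needed = 8 * 260         # capacity only needed during business hours

cloud_cost = opex_per_hour * hours_needed  # 0.50 * 2080 = 1040.0
assert cloud_cost < capex_server           # usage-based spend stays well below capex
```

The gap narrows, of course, if utilisation approaches 24/7, which is exactly why the decision depends on measured usage rather than a blanket rule.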

Mobility and software delivery attained new heights thanks to these “as a service” offerings, which lowered costs and improved efficiency; yet they also created a host of new challenges, due in large part to the internet’s ageing infrastructure.

Undergirding the modern internet

As the internet advances, many of the infrastructure elements that have undergirded it cannot keep pace. One of the key elements of this vast global infrastructure is the Domain Name System (DNS), developed early on in the internet’s history so that people could get to the website they needed without memorising the string of numbers that made up its IP address. DNS has become the gateway to almost every application and website on the internet.

Old-school DNS models just don’t fit the bill anymore. Next-generation solutions are available today that allow businesses to enact traffic management in ways that were previously impossible, including:

  • Dealing with traffic spikes—planned or unplanned—by using scalable infrastructure.
  • Devising business rules that use filters with weights, priorities and even stickiness to meet your applications’ needs.
  • Geofencing that ensures compliance with jurisdictional restrictions: users in the EU, for instance, are serviced only by EU data centres, while ASN fencing ensures all users on China Telecom are served by ChinaCache.
  • Automatically adjusting the flow of traffic to network endpoints, in real time, based on telemetry coming from endpoints or applications. This can help prevent overloading a data centre without taking it offline entirely, seamlessly routing users to the next nearest data centre with excess capacity.
  • Monitoring endpoints from the end user’s perspective, with the ability to send requests coming from each network to the endpoint that will service them best.
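Several of those rules compose naturally in code. The sketch below is a simplified illustration, not any vendor’s API (the endpoint records and field names are invented): geofencing filters the endpoint pool first, health telemetry removes bad endpoints, and weighted selection chooses among what remains.

```python
import random

# Hypothetical endpoint records: region fence, routing weight, and a
# health flag fed by telemetry. All values are illustrative.
ENDPOINTS = [
    {"ip": "198.51.100.10", "region": "EU", "weight": 3, "healthy": True},
    {"ip": "198.51.100.20", "region": "EU", "weight": 1, "healthy": True},
    {"ip": "203.0.113.30",  "region": "US", "weight": 2, "healthy": True},
]

def resolve(client_region: str) -> str:
    """Pick an endpoint for a client: geofence first, drop unhealthy
    endpoints, then choose by weight among the remainder."""
    pool = [e for e in ENDPOINTS
            if e["region"] == client_region and e["healthy"]]
    if not pool:
        # Fail over across the fence rather than serve nothing.
        pool = [e for e in ENDPOINTS if e["healthy"]]
    return random.choices(pool, weights=[e["weight"] for e in pool])[0]["ip"]
```

With these records, an EU client is always answered from an EU data centre; if telemetry marks the US endpoint unhealthy, US clients fail over to the EU pool instead of hitting a dead address.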

In addition, redundancy has come into the spotlight lately as organisations recognise that DNS can represent a single point of failure if there is no backup. It’s becoming more common for enterprises to deploy redundant DNS to mitigate this risk of major downtime. Managed DNS is also on the rise, in which customers benefit from a provider’s globally anycasted DNS networks to achieve maximum reliability and fast performance.

The new infrastructure

Digital transformation brings organisations to the decision point of whether to continue supporting what they have or to replace it. The capital and operational expenses needed to replace hardware and software can exceed the costs of replacing systems with a cloud solution.

Along with that decision in the cloud migration journey comes the option to take advantage of modern software solutions that help remove routing issues and keep traffic flowing. Redundant DNS is one of them, distributing an organisation’s resources so that systems stay up to date, can be scaled horizontally and enjoy greater uptime. Focusing on infrastructure gives companies the functionality to make the internet work for them, not the other way around.

Protect your Mac against risks such as ransomware and shadow IT

It’s more important than ever to protect your digital assets from increasing risks and threats like ransomware and shadow IT. In an earlier blog post, we explained these two serious risks and gave you some tips to protect yourself from them. Today, we would like to go into more detail and invite you to our […]

The post Protect your Mac against risks such as ransomware and shadow IT appeared first on Parallels Blog.