Microservices and Monoliths | @DevOpsSummit #AI #DevOps #Microservices

Is your application too difficult to manage? Do changes take dozens of developers hundreds of hours to execute, and frequently result in downtime across all your site’s functions? It sounds like you have a monolith! A monolith is one of the three main software architectures that define most applications. Whether you’ve intentionally set out to create a monolith or not, it’s worth at least weighing the pros and cons of the different architectural approaches and deciding which one makes the most sense for your applications. The proper design pattern can save your organization time and money, and can make your engineers a whole lot happier. But how will you know which is right for you? Read on to figure out if a monolith, microservices, or self-contained system makes the most sense for your development team.

Tech News Recap for the Week of 04/10/17

Did you have a busy week? Here’s a tech news recap of articles you may have missed for the week of 04/10/2017!

New vSphere 6.5 features. Cisco fortifies storage throughput, analytics. Microsoft patches serious Word bug. How to avoid getting hooked by phishing scams. Shadow Brokers dump contained Solaris hacking tools. Cisco issues two “critical” security warnings for IOS, Apache Struts. Windows Vista reaches its end, and more top news this week you may have missed!

Remember, to stay up-to-date on the latest tech news throughout the week, follow @GreenPagesIT on Twitter.

Tags: Tech News Recap, GreenPages Blog, VMware, Cisco, Microsoft, Fortinet, IT Operations, Security

Click here to download our recent webinar and find out how to embrace, not resist, DevOps and transform your IT with a next-gen IT Operations Transformation Framework.

By Jake Cryan, Digital Marketing Specialist

What do self-driving cars have to do with the cloud?

The next big thing in the world of technology is the Internet of Things (IoT), a technology that enables the smooth flow of data and communication across everyday devices like your alarm clock, refrigerator, car and more. In fact, self-driving cars are becoming a reality sooner than many thought possible, and much of this has to do with advancements made in cloud and IoT.

So, what’s the connection between cloud and IoT?

Simply put, cloud infrastructure is the base technology driving IoT, including self-driving cars. All devices that want to communicate with each other need a common place to store and access data, and this is what cloud infrastructure provides. Regardless of the nature of the device and the function it performs, it can access and store data in the cloud.

This easy access to data is what makes communication possible in the first place. For example, let’s say your refrigerator has to monitor the level of available milk and, if it falls below a threshold, automatically order more by communicating with the app on your smartphone. In addition, this data has to be stored for analysis, so you know how often you’re buying milk and how much you’re spending on it.
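To make the refrigerator scenario concrete, here is a minimal sketch of a device reporting a reading to the cloud and flagging a reorder below a threshold. The endpoint URL, device name, threshold, and payload fields are all hypothetical; a real deployment would typically use an authenticated IoT messaging protocol such as MQTT rather than plain HTTP.

```python
import json
import urllib.request

# Hypothetical cloud endpoint; real devices would authenticate and would
# usually publish over an IoT protocol such as MQTT rather than raw HTTP.
CLOUD_ENDPOINT = "https://example.com/api/telemetry"
MILK_THRESHOLD_LITRES = 0.5  # hypothetical reorder threshold

def report_and_maybe_reorder(milk_litres: float) -> None:
    """Send the current reading to the cloud; ask for a reorder if low."""
    payload = {
        "device": "refrigerator-01",  # hypothetical device id
        "milk_litres": milk_litres,
        "reorder": milk_litres < MILK_THRESHOLD_LITRES,
    }
    req = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        print("cloud accepted reading:", resp.status)

if __name__ == "__main__":
    report_and_maybe_reorder(0.3)  # below threshold, so reorder=True
```

Because both the device and the smartphone app talk to the same cloud-side record, neither needs to know the other’s address; the shared data store is what makes the communication possible.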

All of this communication and data exchange happens through cloud infrastructure. Likewise, your wearables and self-driving cars need cloud infrastructure to communicate across devices.

So far, we’ve established that cloud infrastructure is essential for self-driving cars and IoT in general. The next question is how to extract value from it.

This depends on how a company plans to monetize its infrastructure. Let’s take another example. Say your self-driving car is connected to your smartphone. As you drive past a shop, you get a notification that certain items in that shop are on sale. You may want to stop and check them out, or even buy something. When that sale happens, the car company gets a commission on the sale value.

With such a strategy, the car company extracts more value from its infrastructure than it would from the vehicle sale alone; the revenue stream doesn’t stop once the car is sold. In this sense, the cloud is likely to create a continuous stream of revenue for the company. The above example is just one way of monetizing the cloud. If you look closely, the options are endless, as your car can act as a central hub of communication for many of the activities you do.

These benefits are not limited to the car manufacturer, either. The economic benefits can flow to organizations across different sectors; in the case above, they accrue to the store, its suppliers and more. Overall, this can change the way we think about and buy commodities, and over time the economic benefits will add up.

The underlying driver of this change, however, is cloud infrastructure, which is why it is essential for self-driving cars.

Omnichannel Challenge | @DevOpsSummit @Catchpoint #DevOps #WebPerf

Almost all of the luxury brands that we work with are somewhere on the long and winding road between multichannel and omnichannel. To outsiders, this seems like a small step, but in reality it is an extremely complex transition. In the luxury industry, multichannel often means that the brands have created an online channel (sometimes completely outsourced) that is developed and managed completely separately from the offline channel (brick-and-mortar stores), with only limited integration. Both channels use their own order and warehouse management platforms, logistical systems, and sometimes even ERP and CRM systems.

Brick-and-mortar stores offered a luxurious shopping experience where customers could touch and feel the merchandise and enjoy immediate gratification. The online channel tried to attract customers with a wide product selection, low prices, and additional content like product reviews and ratings.

How to identify malicious content in the cloud

Malicious content and code are unfortunately everywhere in the digital world. For every piece of genuine content, there are at least two pieces of false or illegal content. Though there are many privacy and anti-spam laws, they are not as effective as one would expect.

This puts the onus right back on users like us. We have to learn to navigate the digital world by distinguishing malicious content from genuine content.

This is all the more imperative for companies that host their data and applications in the cloud, as they have much to lose from malicious content. Though the cloud offers a ton of benefits, like increased productivity and reduced operational overhead, it has also opened up more opportunities for hackers and malware authors to insert unwanted code into our applications.

In fact, this problem is more pervasive than most people think. A study by the Georgia Institute of Technology showed that 10 percent of cloud storage repositories had been compromised in one way or another. Surprisingly, many of these cloud repositories act as distribution centers for malicious content without their owners’ knowledge.

This study is an important revelation, as it helps businesses understand the threat landscape in which they operate. It can also help companies come up with appropriate solutions to prevent these attacks or, in the worst case, mitigate them. This way, organizations can limit the impact of such malicious activity and, more importantly, keep their repositories from becoming distribution centers.

The next big question is: how can you tell good content from malicious content?

The same study compared two sets of data, a good set and a bad set, to identify the features that characterize the bad set. One of the first things the researchers noticed was the presence of redirection. If a piece of code or data evaded discovery by a scanner, or if it was served through a proxy, there is a high probability that the content is malicious. The reason is simple: genuine content can be accessed legitimately and directly.

Another big differentiator is the lifetime of the content. In general, malicious content has a short lifespan compared to genuine content: it only needs to stay online long enough to be distributed across systems, and the longer it remains in place, the greater the chance of it being found out. Genuine content, by contrast, can remain in the cloud for many years.
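As a rough illustration of how those two signals could be combined, here is a minimal scoring sketch. The field names, weights, and the 30-day cutoff are illustrative assumptions, not figures from the Georgia Tech study.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

SHORT_LIVED_DAYS = 30  # hypothetical cutoff, not from the study

@dataclass
class HostedItem:
    url: str
    first_seen: datetime
    last_seen: datetime
    uses_redirection: bool  # served via redirect chains or proxies
    evaded_scanner: bool    # hid itself from security scanners

def suspicion_score(item: HostedItem) -> int:
    """Return a rough 0-3 score; higher means more likely malicious."""
    score = 0
    if (item.last_seen - item.first_seen).days < SHORT_LIVED_DAYS:
        score += 1  # short-lived content is a red flag
    if item.uses_redirection:
        score += 1  # genuine content is usually reachable directly
    if item.evaded_scanner:
        score += 1
    return score

item = HostedItem(
    url="https://bucket.example.com/payload.js",
    first_seen=datetime(2017, 4, 1, tzinfo=timezone.utc),
    last_seen=datetime(2017, 4, 5, tzinfo=timezone.utc),
    uses_redirection=True,
    evaded_scanner=True,
)
print(suspicion_score(item))  # -> 3, worth flagging for review
```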

So, what can client organizations do to guard against this malicious code? The answer depends on a host of factors. First, organizations should work with their cloud providers on basic protection mechanisms on the infrastructure side to reduce the chances of malicious code entering the network. Organizations should also take similar steps to ensure that their own networks are not compromised.

Alongside this, organizations should adopt strategies such as controlling access to unauthorized repositories, constantly monitoring assets, and any other measures they deem essential.
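As a minimal sketch of what such monitoring could look like, the snippet below audits a list of repository records for anonymous access and stale policy reviews. The RepoConfig fields and the 90-day review window are hypothetical; a real job would pull this metadata from the cloud provider’s ACL and audit APIs.

```python
from dataclasses import dataclass
from datetime import datetime, timedelta, timezone

MAX_REVIEW_AGE = timedelta(days=90)  # hypothetical review window

@dataclass
class RepoConfig:
    name: str
    allows_anonymous_access: bool
    last_reviewed: datetime

def audit(repos: list[RepoConfig]) -> list[str]:
    """Return findings for repositories that need attention."""
    now = datetime.now(timezone.utc)
    findings = []
    for repo in repos:
        if repo.allows_anonymous_access:
            findings.append(f"{repo.name}: anonymous access enabled")
        if now - repo.last_reviewed > MAX_REVIEW_AGE:
            findings.append(f"{repo.name}: access policy review overdue")
    return findings

repos = [
    RepoConfig("public-assets", True,
               datetime(2017, 1, 2, tzinfo=timezone.utc)),
    RepoConfig("internal-docs", False,
               datetime(2017, 4, 1, tzinfo=timezone.utc)),
]
for finding in audit(repos):
    print(finding)
```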

DevSecOps: Embracing Automation | @DevOpsSummit #DevOps #DevSecOps

While I am all for traditions like Thanksgiving turkey and Sunday afternoon football, holding onto traditions in your professional life can be career limiting. The awesome thing about careers in technology is that you constantly have to be on your front foot. Because when you’re not, someone, somewhere, will be, and when you meet them, they’ll win. One tradition that has a limited lifespan at this moment is waterfall-native development and the security practices that go along with it. While the beginning of the end might have first been witnessed when Gene Kim and Josh Corman presented “Security is Dead” at RSA in 2012, we have more quantifiable evidence from the 2017 DevSecOps Community Survey. When asked about the maturity of DevOps practices in their organizations, 40% stated that maturity was improving, while 25% said that it was very mature across the organization or in specific pockets.

GreenPages-LogicsOne Celebrates 25 Years of Delivering IT Innovation

GreenPages CEO, Ron Dupler

This April we are celebrating GreenPages’ 25th anniversary. As we mark this occasion and on behalf of Team GreenPages, I want to thank our customers—the driving force behind our 25-year journey. I also want to thank our technology partners for your ongoing support and dedication to innovation and excellence.

In April of 1992, an entrepreneur named Kurt Bleicken founded GreenPages as a resource for corporate IT professionals to efficiently procure the hardware and software required to drive their IT initiatives during the early stages of the client-server computing revolution. Prior to founding GreenPages, Kurt spent considerable time speaking to corporate IT leaders to better understand their challenges and best serve their business needs. Kurt embedded “a better way” into our core business systems and founded GreenPages with a strong customer focus and commitment to world-class execution and service, delivered by a customer-focused team working collaboratively in a strong, employee-focused culture. GreenPages saw great growth and success during the 1990s fueled by Kurt’s founding vision.

As I often tell our team, we are living in an amazing period of human history and our 25-year journey has occurred at the epicenter of what makes our time remarkable: the information technology revolution. When GreenPages was founded, we lived in a much different world. Microsoft had just launched Windows backed by a $10 million publicity blitz. The internet browser had just been invented, but few people used or even knew what the internet was. Digital had just announced the Alpha chip to enable 64-bit computing. The JPEG standard had just been finalized, and a prototype SSD module had been submitted for evaluation by IBM. We were still in the early stages of what would become a tsunami of technology innovation that would change the way we live, work, and play, and the very nature of humanity itself.

From the internet becoming a pervasive force in our lives, to the logical abstraction of workloads driven by the virtualization wave, to the emergence of cloud computing and the arrival of the cloud-mobile era, Team GreenPages has evolved in close collaboration with our customers and technology partners. We have moved from a supply chain organization that offered a faster, better, and cheaper method for procuring IT goods, to an industry leader in cloud computing, offering strategic consulting, architecture, systems integration, and systems management for the hybrid cloud computing models fueling the digital era.

As we celebrate our 25th anniversary, we are very excited about the road ahead. The pace of change is accelerating, and we see tremendous opportunities for our organization, our customers, and our technology partners in the digital era. Today, we are focused on enabling technological innovation that fuels our customers’ digitalization strategies, delivering both agility and business velocity. Speed is everything today. The art of IT innovation is delivering this needed agility in a secure and compliant manner: next-generation computing platforms backed by the old-world mandates of security and compliance, which are more important than ever.

I expect the next 25 years to be even more amazing than the past 25 and look forward to celebrating our 50th anniversary with you in 2042. As we drive into the future, our customer focus and commitment will remain the constant amidst the tremendous waves of change sweeping across our industry. We will continue to strive every day to deliver outstanding technology-driven business results with the best, brightest, and most committed team in our industry.

Sincerely,

Ron Dupler
CEO, GreenPages

Research reveals extent of ‘aggressive’ hyperscale operator growth in cloud markets

Hyperscale operators are ‘aggressively’ growing their share of cloud service markets, according to the latest note from Synergy Research.

The analyst firm identifies 24 companies in all that meet its definition of ‘hyperscale’ – not surprisingly including the four main infrastructure players, Amazon Web Services (AWS), Microsoft, IBM, and Google – and argues these companies accounted for more than two-thirds (68%) of the overall cloud infrastructure services market.

Back in December, Synergy noted that hyperscale providers operated more than 300 global data centres between them, expecting this number to surpass 400 by 2018. Of that figure, almost half (45%) of the data centres were in the US, with China (8%), Japan (7%) and the UK (5%) trailing far behind. The company says the current figure is now approaching 320.

This time around, the focus is on the growing dominance of the biggest players in cloud infrastructure markets, including infrastructure as a service (IaaS), platform as a service (PaaS), and private hosted cloud services. By comparison, in 2012 hyperscale operators accounted for 47% of each of those markets.

As Synergy puts it, the ‘scale of infrastructure investment required to be a leading player in cloud services or cloud-enabled services means that few companies are able to keep pace with the hyperscale operators…and they continue to both increase their share of service markets and account for an ever-larger portion of spend on data centre infrastructure equipment.’

“Hyperscale operators are now dominating the IT landscape in so many different ways. They are reshaping the services market, radically changing IT spending patterns within enterprises, and causing major disruptions among infrastructure technology vendors,” said John Dinsdale, research director and a chief analyst at Synergy. “Our latest forecasts show these factors being accentuated over the next five years.”

The value of hybrid: Operate for today and optimise for tomorrow

The term “hybrid cloud” has grown in popularity among established technology vendors, but one could be forgiven for thinking it’s a convenient “cloudwash” for companies to showcase progress while delaying an inevitable shift to public cloud. I say that because to focus exclusively on cloud overlooks the reality that almost every CIO in an established enterprise today is contending with legacy as much as the need to embrace new technologies. Some critical data centre hosted systems just don’t move to cloud so easily.

It’s all simply IT, which is why I call the effective combination of public cloud, managed service provider cloud, and dedicated infrastructure “hybrid IT”. It is by no means intended as a “catch-all”, so I feel I owe it to the reader to explain precisely what I mean and how established companies are successfully exploiting the benefits of such choice on their terms.

Hybrid is about choice

At its core, hybrid is about choice: Choice to work with the appropriate combination of an organisation’s on-premises infrastructure, managed services, private cloud and public cloud infrastructure and services. With the right knowledge and information, companies can optimise IT to balance cost, risk and agility.

The first question to ask concerns value. Hybrid IT allows companies to choose the infrastructure or service that delivers the best value for them at that time. Over time, the value delivered could change and an application that is hosted on-premises could be migrated to a cloud IaaS or SaaS. Changes in cloud services (functionality and price), availability of skills, geographic requirements or software / hardware maintenance needs can all trigger the push-pull of value delivered.

Global presence is a big pull factor for cloud. Over the past two years, the hyperscale cloud providers have battled for global coverage supremacy, opening data centres in important legal jurisdictions and population centres. Here, cloud offers low-latency local infrastructure that is becoming vital for companies looking to improve the experience of their global customers. It also presents options for data sovereignty and regulatory compliance.

Don’t forget the workload: It is common for business critical applications to have resource, locality, dependencies, or indeed hardware requirements that mean they are both complex and costly to migrate to cloud. Some companies will choose to outsource to a specialist to reduce their support costs, and may consider a cloud migration as factors change over time.

Hybrid IT allows enterprises to exploit legacy investment

Most clients we work with have legacy systems that are core to their business and have run for 10 years or more. These legacy systems often run on platforms such as mainframes, IBM i and previous-generation enterprise servers that cannot be transferred to a public cloud or a commodity Windows/Linux environment.

Hybrid IT has allowed them to continue maximising their return on investment in functional legacy systems, while migrating applications to the public cloud that are easily refactored for that environment. This prevents enterprises being hindered in their ability to adapt and innovate, while also freeing up capacity in legacy infrastructure to absorb increased demands.

Hybrid IT mandates integration

No-one can deny the benefits of cloud when it comes to flexibility and future capital savings, but the cost associated with a wholesale cloud migration can vastly outweigh the benefits. It is also important to note the complexities associated with transferring applications and data to the cloud. Some systems simply aren’t suited for cloud environments and others require significant re-engineering.

Here’s the rub: New systems of differentiation and innovation are usually best placed in public cloud environments, and built cloud-native, but these can rely on transactions and datasets residing in legacy systems; for example, a retail website that relies on back-office inventory management. The successful hybrid IT solution must cater for this integration, and in doing so an enterprise with legacy technology gains the agility to transform without going all-in on public cloud.
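To illustrate that retail scenario, here is a minimal sketch of a cloud-hosted storefront querying a legacy back-office inventory system through an internal API. The host name, endpoint path, and response field are hypothetical stand-ins for whatever integration layer fronts the legacy platform.

```python
import json
import urllib.request

# Hypothetical internal endpoint exposed by an integration layer in
# front of the legacy inventory system (e.g. a mainframe or IBM i host).
LEGACY_INVENTORY_URL = "http://inventory.internal.example.com/api/stock"

def stock_level(sku: str) -> int:
    """Ask the legacy back-office system how many units are in stock."""
    with urllib.request.urlopen(f"{LEGACY_INVENTORY_URL}/{sku}") as resp:
        record = json.load(resp)
    return record["quantity"]  # hypothetical response field

def render_availability(sku: str) -> str:
    """What the cloud-native retail site shows next to a product."""
    try:
        qty = stock_level(sku)
    except OSError:
        # The legacy system is unreachable; degrade gracefully rather
        # than letting it take the storefront down too.
        return "Availability unknown - check back soon"
    return "In stock" if qty > 0 else "Out of stock"

print(render_availability("SKU-12345"))
```

The fallback in render_availability reflects the integration lesson above: the cloud-native front end depends on the legacy system for its data, so that dependency must be explicit and survivable.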

Success in hybrid also mandates integration of IT real estate. I have spent more than 15 years helping companies make sense of each wave of cloud, and there are many common mistakes that result from a simple truth: It’s all IT and it all needs managing wherever it runs. If an established IT department doesn’t pay the same attention to developing its toolset, skills, and processes for cloud management as it does for on-premises management, then things will go wrong. Furthermore, those tools and processes must be integrated. Only then can you benefit from the choice and flexibility of hybrid IT.

Keeping pace with new technology deployment and management is unsustainable for all but the largest IT departments. As cloud service choices continue to expand, an increasing number of businesses are seeking specialist help from managed service providers who can provide the integration, specialist tooling, and skills needed to address the complexity and breadth of hybrid IT and make it work for their businesses.

Journey’s end?

Hybrid is about the long-term evolution of IT estates. Almost every company has IT distributed across multiple infrastructures and service providers, and companies that embrace the hybrid IT management challenge are empowered to move forward.

As technology becomes ever more critical to business, the world is not going to get simpler. The ultimate goal must be to deliver a single, consistent service experience across the combination of hybrid IT environments you choose, enabling an organisation to adapt and change with business and technology needs.

Is NoOps the End of DevOps? | @DevOpsSummit #NoOps #DevOps #SDN #AI

Automation, a key pillar of the DevOps movement, frees IT operations to focus on higher-level work and collaborate with cross-functional teams. But what if your automation is so good that developers don’t need you anymore? Mike Gualtieri of Forrester Research coined the term NoOps in his controversial blog post “I don’t want DevOps. I want NoOps.” In the post, Gualtieri says, “NoOps means that application developers will never have to speak with an operations professional again.”
