Brand owners are caught in a digital crossfire. From one direction comes intense competitive pressure to innovate, or at least to follow very quickly. From the opposite direction comes the potentially existential threat of an app very publicly flopping or – even worse – being very publicly revealed to jeopardize the customer’s well-being. Either way, you lose brand value in a social marketplace where brand is your primary currency.
What’s a brand owner to do?
Cloud APM
As the race for the presidency heats up, IT leaders would do well to recall the famous catchphrase from Bill Clinton’s successful 1992 campaign against George H. W. Bush: “It’s the economy, stupid.” That catchphrase is important, because IT economics are important. Especially when it comes to cloud. Application performance management (APM) for the cloud may turn out to be as much about those economics as it is about customer experience.
How APIs are enabling the future of IT infrastructure
Companies are always looking for new ways to increase efficiency and reduce costs while maintaining excellence in the quality of their products and services. Within cloud computing, IT departments and service providers increasingly look to APIs (application programming interfaces) to enable automation, which in turn drives efficiency, consistency and cost savings. How are businesses doing this, and where are the opportunities for future development?
Enabling operational efficiency
One important outcome of the automation enabled by APIs is consistency. Through automation, businesses remove human error (and human expense) from operational processes. Even when a repeatable task is well documented with a clear procedure, human workers performing it will likely produce varied outcomes. If that repeatable task is automated, on the other hand, it will be performed in the same way every time, improving operational reliability and in turn operational efficiency. API-enabled platforms are driving a true rethink in how we manage IT; we are moving quickly from a process-driven, reactive world to an automation-driven, proactive world.
Enabling DevOps automation
APIs allow for more dynamic systems that can scale up and down to deliver just the right amount of infrastructure to the application at all times. For example, instrumentation in your application that provides visibility to an orchestration layer can tell when more capacity is required in the web or app tier. The orchestration layer can then call back to the APIs provided by the infrastructure and begin spinning up new web servers and adding them to the load balancer pool to increase capacity. Likewise, systems built on APIs have the instrumentation to tell when they are overbuilt, at night for example, and can then use the APIs to wind down unnecessary servers to reduce costs.
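As a rough sketch of what that orchestration call might look like (assuming AWS as the infrastructure provider and the boto3 client library; the AMI ID, instance type, tags and target group ARN are all placeholders), a single scale-out step could be:

```python
# Hypothetical scale-out step: launch one more web server via the infrastructure
# API and register it with the load balancer pool. All IDs/ARNs are placeholders.
import boto3

ec2 = boto3.client("ec2")
elb = boto3.client("elbv2")

def add_web_server(ami_id="ami-0123456789abcdef0",
                   target_group_arn="arn:aws:elasticloadbalancing:...:targetgroup/web/abc123"):
    # Ask the compute API for one additional web server
    reservation = ec2.run_instances(
        ImageId=ami_id,
        InstanceType="t3.medium",
        MinCount=1,
        MaxCount=1,
        TagSpecifications=[{
            "ResourceType": "instance",
            "Tags": [{"Key": "Role", "Value": "web"}],
        }],
    )
    instance_id = reservation["Instances"][0]["InstanceId"]

    # Wait until the instance is running, then add it to the load balancer pool
    ec2.get_waiter("instance_running").wait(InstanceIds=[instance_id])
    elb.register_targets(
        TargetGroupArn=target_group_arn,
        Targets=[{"Id": instance_id}],
    )
    return instance_id
```

The same API calls run in reverse (deregister the target, terminate the instance) give you the scale-in path when the instrumentation shows the tier is overbuilt.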
Indeed, by scripting development and testing environments to power on at the start of the business day and power off automatically at the end of it, businesses can realize huge savings on their hosting costs, up to 50-60 per cent in some cases.
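A minimal version of that schedule, again a sketch assuming AWS, boto3 and an Environment tag as the selector for non-production machines, could be triggered by cron or any scheduler at the start and end of the business day:

```python
# Hypothetical dev/test power schedule: stop tagged instances after hours and
# start them again in the morning. The tag key and values are assumptions.
import boto3

ec2 = boto3.client("ec2")
DEV_FILTER = [{"Name": "tag:Environment", "Values": ["dev", "test"]}]

def instance_ids(state):
    filters = DEV_FILTER + [{"Name": "instance-state-name", "Values": [state]}]
    pages = ec2.get_paginator("describe_instances").paginate(Filters=filters)
    return [i["InstanceId"]
            for page in pages
            for r in page["Reservations"]
            for i in r["Instances"]]

def end_of_day():
    ids = instance_ids("running")
    if ids:
        ec2.stop_instances(InstanceIds=ids)

def start_of_day():
    ids = instance_ids("stopped")
    if ids:
        ec2.start_instances(InstanceIds=ids)
```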
Overall, leveraging APIs in support of a DevOps strategy is always a blend of optimizing for cost, for performance, and for deep app-level visibility.
Using APIs to automate reporting
APIs are also highly useful in reporting, as many applications now produce vast amounts of data that are often an untapped asset. IT teams therefore need to think about how to make those datasets available efficiently in order to build a dynamic reporting engine, one that can ideally be configured by the end user, who best understands the information he or she needs to extract from the data.
This is frequently accomplished through APIs. IT teams and application services providers can use APIs to build systems that process the data and make it accessible to end users immediately, so that they do not have to go through a reporting team and do not lose any of the real-time value of their data.
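One way this pattern can look in practice, sketched here with Flask as the API layer and a stand-in query function for whatever store the application writes to, is a small endpoint that lets the end user pick the metric and date range themselves rather than waiting on a reporting team:

```python
# Minimal reporting API sketch: expose application data to end users directly.
# query_metric() is a placeholder for the application's actual datastore.
from flask import Flask, jsonify, request

app = Flask(__name__)

def query_metric(name, start, end):
    # Placeholder: in a real system this would query the application's
    # database or analytics store for the requested window.
    return [{"timestamp": start, "metric": name, "value": 0}]

@app.route("/reports/<metric>")
def report(metric):
    start = request.args.get("start", "2017-01-01")
    end = request.args.get("end", "2017-01-31")
    rows = query_metric(metric, start, end)
    return jsonify({"metric": metric, "rows": rows, "count": len(rows)})

if __name__ == "__main__":
    app.run(port=8080)
```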
API use in enabling business continuity and disaster recovery
The benefits of automation through APIs make them a crucial part of modern disaster recovery approaches. The assumption that you’ll be able to access all of the tools you would need during a disaster through the typical user interfaces is not always true. In the modern world of highly virtualized infrastructure, APIs are the enabler for the core building blocks of disaster recovery, in particular replication, which is driven from the APIs exposed by the virtualization platforms. For the same reasons, failover, the final act of DR orchestration, is also often highly API-dependent.
In essence, disaster recovery is one specific use case of the way that APIs enable efficiency and operations automation. Humans make mistakes, and processes become very difficult to maintain and update. Therefore, a DR plan based on humans executing documented processes is not an ideal option to ensure the safety of your business in the event of a disaster. Kicking off DR can be likened to “pressing the big red button”. However, if you can make it one button that kick-starts a set of automated processes, this will be much more manageable and reliable than thirteen different buttons, each of which has a thirty-page policy and procedure document that must be executed during a disaster.
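For illustration, here is a sketch of what that one button can amount to, assuming AWS and boto3, pre-replicated standby instances tagged for DR, and a Route 53 record that is flipped to the DR endpoint; every identifier below is a placeholder:

```python
# Hypothetical "big red button": one entry point that starts the standby
# fleet and flips DNS to the DR site. All identifiers are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-west-2")
route53 = boto3.client("route53")

HOSTED_ZONE_ID = "Z0000000000000"
RECORD_NAME = "app.example.com."
DR_ENDPOINT = "dr-lb.example.com."

def failover():
    # 1. Start the pre-replicated standby instances in the DR region
    resp = ec2.describe_instances(
        Filters=[{"Name": "tag:Role", "Values": ["dr"]},
                 {"Name": "instance-state-name", "Values": ["stopped"]}])
    ids = [i["InstanceId"]
           for r in resp["Reservations"] for i in r["Instances"]]
    if ids:
        ec2.start_instances(InstanceIds=ids)

    # 2. Point the application record at the DR load balancer
    route53.change_resource_record_sets(
        HostedZoneId=HOSTED_ZONE_ID,
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "CNAME",
                "TTL": 60,
                "ResourceRecords": [{"Value": DR_ENDPOINT}],
            },
        }]},
    )
```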
The future of APIs
Despite the clear benefits of API-enabled automation and technology, the broader IT industry has not yet fully realized the potential of this technology, particularly in industries that have been leveraging information technology for a long time. In these industries, we are seeing a critical mass of legacy applications, legacy approaches to managing infrastructure, and legacy staff skillsets.
It is likely that the younger generation coming into the IT industry will move towards more comprehensive API use and maximize the value of APIs, because this generation has grown up with them and learned with them. As disruptors displace incumbent packaged software players and new entrants join the enterprise IT community, we are likely to see more realization of the benefits of API use – particularly in making full use of cloud infrastructures. However, this will take time, and we may be one to two full education cycles away from producing and maturing enough entry-level IT professionals with the education and training required to fully exploit the opportunities offered by APIs, particularly cloud ones.
Working as part of cloud computing solutions, APIs are also reducing the cost of developing new ideas. Businesses that want to innovate no longer need to make large upfront investments in equipment to get an idea off the ground. They can quickly start on cloud infrastructure-as-a-service platforms and use APIs to control systems and power them down to reduce costs as needed. As the new product or service grows, organizations can quickly scale on the same cloud infrastructure. And for this to truly cut costs, APIs should be part and parcel of cloud solutions – not a pricey addition to them.
As more and more innovative startups develop in the tech space, and enterprises increasingly search for new solutions and ways of working, we are likely to see even more creative uses of APIs to drive automation, consistency and efficiency. It’s important that businesses work to stay ahead of the market and competitors by making full use of new API-enabled software and other technologies to fully realize the benefits and cost-savings that they offer.
It Is Not About Security
There has certainly been no lack of punditry and controversy in the US regarding the hacking of John Podesta’s email account (along with the DNC email hack), with some claiming the hacks were responsible for Mrs. Clinton’s loss in the election. I will leave the impact of these claims to those who write and talk about politics. I don’t discuss politics in a work setting, so will leave that aspect to them.
Bouncy Castle and Encryption
In September 2014, Apple made encryption default with the introduction of the iPhone 6. Then, in February 2016, a Los Angeles judge issued an order to Apple to help break into the encrypted iPhone belonging to a terrorist involved in a mass shooting. Apple had used some of the strongest encryption technologies and practices to protect its users and their data. The encryption technology did not discriminate between lawful and unlawful users. While there were many sides to this issue, it surfaced many important debates on security, privacy, and civil rights.
A Cloud Year for Australia and New Zealand (ANZ) Region?
Cloud is taking the world by storm. Currently, North America accounts for the highest revenue and the top service providers call this region home, but the Asia-Pacific region is the fastest-growing one. Europe and Africa are catching up in their own respective ways, and the Australia and New Zealand (ANZ) region is not to be left behind.
In fact, it may be a cloud year for companies in the ANZ region, going by a survey conducted by Computer Weekly/TechTarget. Their report, titled IT Priorities Research, shows that a significant number of the CIOs interviewed as part of the survey said they have already moved to the cloud or are planning to do so by 2017. Many of these cloud initiatives are expected in the areas of data center, storage, and backup, though other aspects such as cloud computing, IoT, and M2M are also expected to drive cloud adoption in the ANZ region.
To be precise, the report states that 41 percent of IT decision-makers are looking to be involved in some form of cloud storage initiative, while 36 percent will choose a cloud backup feature this year. Besides these two areas, cloud is also going to play a big role in ANZ data centers, as 39 percent of respondents plan to work with pure or hybrid cloud models within their data centers.
In addition, 54 percent of respondents believe that cloud computing will take up a significant part of the IT budget this year. This is an interesting revelation because, in another survey, 38 percent of IT decision-makers in Australia and New Zealand expect their IT budgets to be flat or lower in 2017 compared to the previous year. Putting these two together, we can say that even if budgets remain stagnant, the fact that much of the spend will be allocated to cloud means that IT decision-makers hope to make the most of every dollar spent.
This report is sure to bring much cheer to cloud service providers of all sizes, as everyone can have a share of the pie, though the larger providers will take a significantly bigger share than the smaller ones. Already, companies like Alibaba and Amazon Web Services (AWS) are setting up operations in Australia or expanding their presence to cover more cities in the region. Alibaba, for example, has set up a large data center in Sydney, one of its largest outside mainland China. Other providers like Microsoft, IBM, and Google are likely to catch up too, and it won’t be long before we see their presence in this region.
In all, this news exudes much optimism, as companies can make their operations more efficient and bring in higher revenues and profits. The cloud providers who set up shop here will employ more people, who, in turn, will fuel more demand for goods and services. Eventually, these developments are sure to augur well for the economies of both countries.
Larger organisations more likely to push ahead with DevOps initiatives, research argues
Almost half of respondents in a new study from Redgate Software say they have adopted a DevOps approach to their projects – with a further third planning to join them within the next two years.
The study, the firm’s latest State of Database DevOps survey, polled 1,000 companies globally with more than half employing at least 500 people. While 47% polled overall said they are already on the road with DevOps initiatives, this number rises to 59% among companies with more than 10,000 employees.
IT services and retail are the industries most likely to favour DevOps, alongside finance and healthcare, while government, education and non-profit are the laggards, according to the research. Only one in five respondents said they are applying practices such as continuous delivery to their databases and their applications.
The biggest problem businesses looking at initiating DevOps face, according to the study, is a lack of appropriate skills. For those with no intentions to move over right now, the major hurdles remain a lack of awareness of business benefits, as well as not enough budget to spend on new tooling.
Naturally, any move towards DevOps benefits different job roles in various ways. Redgate argues that developers are on board because they want to be freed to do more value-added work, while database admins are driven more by improved collaboration between development and operations teams, as well as by the need to reduce application downtime.
For Redgate, the results are somewhat unsurprising. “We’ve been helping our customers to improve the way they make changes to their databases for over 17 years now,” said Kate Duggan, Redgate product marketing manager. “This survey has highlighted that our customers are facing increasing pressure to speed up the delivery of software, and include the databases in the same processes they use for their applications. It means we can ensure we’re in a good position to help them overcome the particular challenges the database brings.”
You can read the full report here (registration required).
‘Security by design’ and adding compliance to automation
By Jason McKay, CTO and SVP of Engineering, Logicworks
Security is “job zero” for every company. If you are putting your customers or users at risk, you will not be in business for long. And that begins with taking a more proactive approach to infrastructure security — one that does not rely on the typical protective or reactive third party security tools, but builds security into your infrastructure from the ground up.
As your company moves to the cloud, it has an opportunity to start fresh and rethink who and what is responsible for security in your environment. You also want to be able to integrate security processes into your development pipeline and maintain consistent security configurations even as your applications constantly change. This has led to the rise of Security by Design.
The security by design approach
Security by design (SbD) is an approach to security that allows you to formalize infrastructure design and automate security controls so that you can build security into every part of the IT management process. In practical terms, this means that your engineers spend time developing software that controls the security of your system in a consistent way 24×7, rather than spending time manually building, configuring, and patching individual servers.
This approach to system design is not new, but the rise of public cloud has made SbD far simpler to execute. Amazon Web Services has recently been actively promoting the approach and formalizing it for the cloud audience. Other vendors promote similar or related concepts, often called Secure DevOps or Security Automation or Security-as-Code or SecOps. The practice becomes more important as your environment becomes more complex, and AWS actually has many native services that, if configured and orchestrated in the right way, create a system that is more secure than a manually-configured on-premises environment.
Does this mean that companies no longer need security professionals, just security-trained DevOps engineers? Not at all. When security professionals embrace this approach, they have far greater impact than in the past. This is actually an opportunity for security professionals to get what they have always dreamed of: introducing security earlier in the development process. Rather than retroactively enforcing security policies — and always being behind — they are part of the architecture planning process from Day 1, can code their desired specifications into templates, and always know that their desired configurations are enforced. They no longer need to be consulted on each and every infrastructure change, they only need to be consulted when the infrastructure templates change in a significant way. This means less repetitive busy-work, more focus on real issues.
Security by design in practice
In practice, SbD is about coding standardized, repeatable, automated architectures so that your security and audit standards remain consistent across multiple environments. Your goals should be:
- Controlled, standardized build process: Code the architecture design into a template that can build out a cloud environment. In AWS, you do this with CloudFormation. You then code OS configurations into a configuration management tool like Puppet (see the sketch after this list).
- Controlled, standardized update process: Put your CloudFormation templates and Puppet manifests in a source code management tool like Git that allows you to version templates, roll back changes, see who did what, etc.
- Automated infrastructure and code security testing as part of the CI/CD pipeline: Integrate both infrastructure and code-level tests into the code deployment process as well as the configuration management update process. At Logicworks, we often use AWS CodeDeploy to structure the code deployment process. You can also use Docker and AWS ECS.
- Enforced configurations in production: Create configuration management scripts that continually run against all your environments to enforce configurations. These scripts are usually hosted in a central management hub, which necessitates a hub-and-spoke VPC design approach.
- Mature monitoring tools with data subject to intelligent, well-trained human assessment: In compliant environments, your monitoring tools are usually mandated and logs must be subject to human review; we use native AWS tools like AWS CloudWatch, CloudTrail, and Inspector, as well as Alert Logic IDS and Log Manager and Sumo Logic, to meet most requirements. Sumo Logic helps us use machine learning to create custom alerts that notify our 24×7 Network Operations Center when unusual activity occurs, so that those engineers can take appropriate action with more accurate real-time data.
- Little to no direct human intervention in the environment…ever: Once all these tools are in place, you should no longer need to directly modify individual instances or configurations. You should instead modify the template or script to update (or more ideally, relaunch) the environment.
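As a minimal sketch of the first two goals (assuming boto3 and a CloudFormation template kept under version control; the stack name and file path are placeholders), the build step of such a pipeline might do little more than validate the versioned template and launch it:

```python
# Hypothetical build step: deploy the versioned CloudFormation template so
# every environment is created the same way. Paths and names are placeholders.
import boto3

cfn = boto3.client("cloudformation")

def deploy(stack_name="web-tier", template_path="templates/web-tier.yaml"):
    with open(template_path) as f:
        template_body = f.read()

    # Fail fast if the template itself is malformed
    cfn.validate_template(TemplateBody=template_body)

    # Create the stack; an equivalent update_stack call would handle changes
    cfn.create_stack(
        StackName=stack_name,
        TemplateBody=template_body,
        Capabilities=["CAPABILITY_NAMED_IAM"],
    )
    cfn.get_waiter("stack_create_complete").wait(StackName=stack_name)
```

Because the template, not a human, defines the environment, every environment launched from it comes out the same way, which is the point of the controlled, standardized build process.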
We have gone into significant technical depth into Logicworks’ security automation practices in other places; you can see our Sr. Solutions Architect’s talk about security automation here, watch him talk about our general automation practices here, or read this in-depth overview of our automation practices.
Here are some other great resources about Security by Design and Secure DevOps:
- AWS Security by Design White paper
- SANS Institute: Continuous Security: Implementing the Critical Controls in a DevOps Environment
Compliance + security by design
As you can imagine, the SbD approach has significant positive impacts on compliance efforts. The hardest thing to achieve in infrastructure compliance is not getting security and logging tools set up and configured, it is maintaining those standards over time. In the old world, systems changed infrequently with long lead-times, and GRC teams could always spend 2-3 weeks evaluating and documenting change manually (usually in a spreadsheet). In the cloud, when code gets pushed weekly and infrastructure is scalable, this manual compliance approach can severely limit the success of cloud projects, slow down DevOps teams, and frustrate both business and IT.
Running applications in the cloud requires a new approach to compliance. Ideally, we need a system that empowers developers and engineers to work in an agile fashion while still maintaining security and compliance standards; we need a toolchain that a) makes it easier to build out compliant environments, b) provides guardrails to prevent engineers/developers from launching resources outside of compliance parameters, and c) provides ongoing documentation about the configuration of infrastructure resources. The toolchains we have already described — templating, configuration management, monitoring — allow us to launch new compliant environments trivially, ensure very limited access to the environment, and provide full documentation on every change. Together, this means a greatly reduced risk of undocumented configuration change, error, or lack of adequate knowledge about where sensitive data lives, and therefore a greatly reduced risk of compliance violations.
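A small example of such a guardrail, sketched with boto3 and using "no SSH open to the world" purely as an illustrative compliance parameter, can run continually and self-correct rather than wait for a quarterly audit:

```python
# Hypothetical guardrail: find security groups that allow SSH from anywhere
# and revoke the offending rule. The specific rule is only an illustration.
import boto3

ec2 = boto3.client("ec2")

def enforce_no_public_ssh():
    groups = ec2.describe_security_groups()["SecurityGroups"]
    for group in groups:
        for perm in group.get("IpPermissions", []):
            if perm.get("FromPort") == 22 and any(
                    r.get("CidrIp") == "0.0.0.0/0" for r in perm.get("IpRanges", [])):
                # Self-correct: remove the non-compliant rule, leaving an audit trail
                ec2.revoke_security_group_ingress(
                    GroupId=group["GroupId"],
                    IpPermissions=[{
                        "IpProtocol": perm["IpProtocol"],
                        "FromPort": perm["FromPort"],
                        "ToPort": perm["ToPort"],
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}],
                    }])
                print("Revoked public SSH on", group["GroupId"])
```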
When systems are complex, there must be an equally powerful set of management tools and processes to enforce and maintain configurations. Continuous compliance is only possible if you treat your infrastructure as code. If your infrastructure can be controlled programmatically, your security and compliance parameters are just pieces of code, capable of being changed more flexibly, versioned in Git like any piece of software, and automated to self-correct errors. This is the future of any type of security in the cloud.
The future of SbD
SbD allows customers to automate the fundamental architecture and, as AWS says, "render[s] non-compliance for IT controls a thing of the past."
Recent announcements out of AWS re:Invent 2016 are particularly exciting. AWS launched a major update to their EC2 Systems Manager tool, which is a management service that helps you automatically collect software inventory, apply OS patches, create system images, and configure Windows and Linux operating systems. Basically, AWS is filling the gaps in its existing SbD toolchain, stringing together a lot of the controls described above and allowing you to define and track system configurations, prevent drift, and maintain software compliance. Although EC2 Systems Manager was upstaged by several more headline-worthy releases, the service will make a significant difference to compliance teams in the cloud.
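As a rough illustration of the kind of control the service exposes (a sketch assuming boto3, instances already registered with the SSM agent, and a Role tag used for targeting), even a patch-compliance scan becomes a single API call:

```python
# Hypothetical patch scan via EC2 Systems Manager: run the AWS-RunPatchBaseline
# document in "Scan" mode against instances tagged as web servers.
import boto3

ssm = boto3.client("ssm")

def scan_for_missing_patches():
    result = ssm.send_command(
        Targets=[{"Key": "tag:Role", "Values": ["web"]}],
        DocumentName="AWS-RunPatchBaseline",
        Parameters={"Operation": ["Scan"]},
    )
    # The command ID can be polled later for per-instance compliance results
    return result["Command"]["CommandId"]
```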
In the future, expect AWS and other cloud platforms to launch more comprehensive tools that make it easier for enterprises to achieve SbD in the cloud. The tools already exist, but assembling them into a robust framework can be a challenge for most IT teams. Expect enterprises to turn towards security-focused partners to fill the skills gap.
Use Multiple Monitors Full Screen with Parallels Desktop for Mac
Whether you’re a fresh adopter of virtual machines or a longtime lover of virtualization, Parallels Desktop 12 for Mac has optimized support for your external monitors and Full Screen mode! You can view your Parallels Desktop VM on your native display or an external monitor so it looks just like it would if you were […]