All posts by Praharsha Anand

Google launches Meet Progressive Web App


Praharsha Anand

3 Aug, 2021

Earlier this year, Google revealed it was testing pre-installed Meet and Chat web apps on Chrome OS and planned to release them to the public. Delivering on that promise, Google announced Meet is now a progressive web app (PWA).

PWAs are responsive websites that look and feel like native mobile apps.

Google further stated the PWA version of Meet has the same features as its app counterpart, except it is easier to use and far more accessible. The Meet icon will now appear on users’ shelves and launchers, providing easy access to video chat. As with other PWAs, Google Meet will update automatically during Chrome updates.

“We’ve launched a new Google Meet standalone web app. This Progressive Web Application (PWA) has all the same features as Google Meet on the web, but as a standalone app it’s easier to find and use, and it streamlines your workflow by eliminating the need to switch between tabs,” explained Google.

Users can find the PWA installation prompt in the top-right corner of Chrome’s address or URL bar. Once installed, Meet will load in a standalone window. Users can run the Google Meet PWA on Windows, macOS, Chrome OS 73 and up, and Linux.

The Google Meet service is available to anyone with a Google account, including G Suite Basic and Business customers. Administrators can manage PWA access or automatically install progressive web apps for users.

Among the Meet software updates are cross-domain live streaming, live stream captions, and hand raise updates for desktops and laptops.

Google has confirmed the Google Meet PWA will arrive starting today, but some features could take up to 15 business days to appear.

AWS introduces fully managed Fault Injection Simulator


Praharsha Anand

16 Mar, 2021

Amazon Web Services (AWS) has announced Fault Injection Simulator, a fully managed service for running controlled experiments on AWS. 

Primarily used in chaos engineering, fault injection experiments subject applications to sudden stress, allowing engineering teams to observe how systems respond and implement improvements accordingly. 

According to AWS, its new Fault Injection Simulator makes it easy for teams to monitor and inspect blind spots, performance bottlenecks, and other vulnerabilities that conventional tests fail to identify.

The tool comes with pre-built experiment templates that enable teams to impair the performance of specific applications in a production environment, either gradually or all at once. For convenience, the simulator also provides controls and guardrails so teams can automatically roll back or stop an experiment when specific conditions are met.

What’s more, the simulator allows teams to create disruptive experiments across a range of AWS services, including Amazon EC2, Amazon EKS, Amazon ECS, and Amazon RDS. Teams can also run “GameDay scenarios or stress-test their most critical applications on AWS at scale,” said AWS. 
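To make the workflow concrete, here is a minimal sketch of how such an experiment could be defined with the AWS SDK for Python (boto3): a template that stops a single tagged EC2 instance, with a CloudWatch alarm acting as the guardrail. The IAM role ARN, alarm ARN, and tag values are hypothetical placeholders rather than details from AWS’s announcement.

```python
import boto3

# Hypothetical sketch: define and run a controlled fault injection experiment.
# The IAM role, CloudWatch alarm ARN, and resource tag below are placeholders.
fis = boto3.client("fis", region_name="us-east-1")

template = fis.create_experiment_template(
    clientToken="demo-chaos-experiment-1",
    description="Stop one tagged EC2 instance to observe how the service degrades",
    roleArn="arn:aws:iam::123456789012:role/fis-experiment-role",
    # Guardrail: abort the experiment automatically if this alarm fires.
    stopConditions=[{
        "source": "aws:cloudwatch:alarm",
        "value": "arn:aws:cloudwatch:us-east-1:123456789012:alarm:app-latency-high",
    }],
    # Target: a single EC2 instance selected by tag.
    targets={
        "web-servers": {
            "resourceType": "aws:ec2:instance",
            "resourceTags": {"env": "staging"},
            "selectionMode": "COUNT(1)",
        }
    },
    # Action: stop the selected instance.
    actions={
        "stop-instance": {
            "actionId": "aws:ec2:stop-instances",
            "targets": {"Instances": "web-servers"},
        }
    },
)

# Launch the experiment from the stored template.
fis.start_experiment(
    clientToken="demo-chaos-run-1",
    experimentTemplateId=template["experimentTemplate"]["id"],
)
```

Binding the stop condition to an alarm mirrors the guardrail behaviour described above: if the application’s latency alarm trips, the experiment halts automatically.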

For best results, AWS recommends enterprises integrate the simulator into their continuous delivery pipelines. Continuous integration of fault injection enables teams to constantly monitor for and unearth production vulnerabilities, improving application performance, observability, and resiliency.

“With a few clicks in the console, teams can run complex scenarios with common distributed system failures happening in parallel or building sequentially over time, enabling them to create the real world conditions necessary to find hidden weaknesses,” said AWS.

“nClouds is adding advanced chaos engineering capabilities and service offerings to our DevOps practice that will improve the resiliency of distributed service architectures we build for our customers and prove regulatory compliance,” said Marius Ducea, VP of DevOps practice at nClouds.

“AWS Fault Injection Simulator has a deep level of fault injection that will enable us to create failure scenarios that more accurately reflect real-world events. With this capability, we expect to have an even better perspective on the expected time to recovery during real events.”

Dell and Faction debut multi-cloud backup


Praharsha Anand

9 Mar, 2021

Dell and Faction have announced new multi-cloud storage and data protection solutions for enterprises looking to monitor their critical data from a centralised location.

“Protecting against ransomware and other cyber attacks is quickly rising in importance for both IT and business executives,” said Joe CaraDonna, CTO of public cloud and APEX offerings at Dell Technologies.

“With the average cost of an attack being $13m, it’s easy to see why this is so concerning. As IT deployments become more complex, with on-prem deployments combined with applications running in multiple public clouds, it is hard to build a world class centralized solution to protect critical data from the modern threat of sophisticated cyberattacks… until today.”

Dell’s Cloud PowerProtect for Multi-cloud is a fully managed service that allows users to safeguard their data in multiple clouds from a single location. Building on this solution, Dell has announced a new security capability called Dell EMC PowerProtect Cyber Recovery. 

Enhanced PowerProtect for Multi-cloud will enable customers to physically and logically isolate their critical data in an air-gapped cloud Cyber Recovery vault in a Faction-powered data center. Data immutability and CyberSense intelligent analytics are also included in the service to ensure enterprise-grade security. In the event of a cyber attack, users can easily move data from the vault to their data center of choice, including AWS, Azure, Google Cloud, or Oracle Cloud.

What’s more, using Superna Eyeglass DR Manager with PowerScale for Multi-cloud, customers can mirror data from their data centers to Faction’s cloud-adjacent center. Users can also choose to recover and save applications in Faction’s data center or any other public cloud listed under Dell’s offerings. 

Superna Eyeglass DR Manager’s other interesting capabilities include one-button failover, flexible SyncIQ scheduling, continuous readiness monitoring, disaster recovery (DR) testing, data loss exposure analysis, and reporting.

According to IT analyst research and validation agency Enterprise Strategy Group, PowerScale for Multi-cloud lowers storage costs by up to 89%. The solution can also reach up to 2Tbps multi-cloud throughput via multiprotocol data access on PowerScale for network file system (NFS), server message block (SMB), Hadoop distributed file system (HDFS), and Amazon’s simple storage service (S3).

“The future of IT is hybrid – a world that balances the right public cloud services with the right on-premises infrastructure to provide the performance, scale, functionality and control required of modern applications and development paradigms. As customers consider this hybrid model, it is important to take a data-first approach,” added CaraDonna.

IBM and Palantir debut no-code platform for AI applications


Praharsha Anand

9 Feb, 2021

IBM and Palantir have announced a jointly developed product for AI applications called Palantir for IBM Cloud Pak for Data. 

Built on Red Hat OpenShift, the new platform offers a no-code/low-code environment for building and deploying AI-based applications. 

“Today, nearly 75% of businesses surveyed in an IBM-sponsored report say they are exploring or implementing AI. However, 37% cite limited AI expertise and 31% cite increasing data complexities and silos as barriers to successful adoption,” said IBM.

Palantir for IBM Cloud Pak for Data leverages the Palantir Foundry data operations platform and IBM Cloud Pak for Data services, such as Watson, to help users access, analyze, and act on extensive data spread across hybrid cloud environments. The platform also enables businesses to reduce data silos and monitor data throughout the AI lifecycle, from data scoping and model building to analytics and full-fledged enterprise AI deployment.

What’s more, businesses can choose to work with IBM Data Science and AI Elite team to address AI adoption challenges and handle any data science use cases.

The offering targets enterprises looking to adopt AI-based applications that automate tasks and processes involving large quantities of data, enabling informed, data-driven decision-making.

For instance, Palantir for IBM Cloud Pak for Data provides retailers with increased visibility and transparency by integrating data across their operational silos, enabling vendors and distributors to proactively monitor supply-chain health in real-time. 

In the finance sector, Palantir for IBM Cloud Pak for Data helps with high-volume data integration, deduplication, and mapping to a common data model, ensuring an aggregated and consistent single customer view (SCV).

Also primed for telecommunications, Palantir for IBM Cloud Pak for Data connects supplier, CRM, sales order, and production data with AI models for campaign optimization and attrition prediction/prevention to enhance customer care and add value across multiple business objectives.

“Our clients deliver products and services while operating in some of the most complex, fast-changing industries of the world,” said Rob Thomas, senior vice president, cloud and data platform, IBM. “Together, IBM and Palantir aim to make it easier than ever for businesses to put AI to work and become data-driven throughout their operations.”

Accenture and Salesforce join forces to bring sustainability to organisations’ front offices


Praharsha Anand

27 Jan, 2021

Accenture and Salesforce have expanded their partnership to help companies embed sustainability into their business, meet growing end-user demand for data-based insights, and accelerate progress towards the United Nations’ sustainable development goals (SDGs).

The new partnership also focuses on providing the C-suite with true visibility into their company’s historical and real-time environmental, social, and governance (ESG) data. Organisations will be able to track, measure, and act on a range of sustainability initiatives, including reporting on carbon usage, supporting customer engagements, creating positive consumer experiences, meeting regulatory requirements, and developing new business models.

“Every CEO is recognising their responsibilities don’t stop at the edge of the corporate campus or Zoom,” said Marc Benioff, chairman and CEO at Salesforce. 

“By integrating sustainability deep into the fabric of our companies, our businesses will become more successful, our communities more equal, our societies more just and our planet healthier. We’re incredibly proud to be working with Accenture to help customers more readily drive sustainability programs that benefit all stakeholders and create business value.”

Salesforce Sustainability Cloud helps businesses measure and manage their carbon footprint by offering a 360-degree view of their corporate environmental impact. It also helps organisations transparently report their investor-grade climate data. As part of their new collaboration, Accenture will leverage Salesforce’s Sustainability Cloud to develop sustainability insights that can scale across organisations and their ecosystems.

“Climate change continues to be one of the most critical challenges facing business and the broader planet,” added George Oliver, CEO of Johnson Controls. 

“We are pleased to be working with Salesforce and Accenture in accelerating sustainability activities for JCI, for our customers and our communities, especially as momentum for action continues to grow.”

Later this year, Accenture and Salesforce will work together to expand their combined platform and services to track and analyse broader ESG metrics, from water and waste management to diversity and inclusion.

Nokia and Google to co-develop cloud-native 5G solutions


Praharsha Anand

19 Jan, 2021

Google Cloud and Nokia are teaming up to develop cloud-native 5G solutions for communications service providers (CSPs) and enterprise customers.

The new partnership also focuses on modernising network infrastructures and developing network edge as a business services platform for enterprises. Furthermore, the companies will co-innovate solutions to help CSPs deliver 5G connectivity and services at scale.

“Communications service providers have a tremendous opportunity ahead of them to support businesses’ digital transformations at the network edge through both 5G connectivity and cloud-native applications and capabilities,” said George Nazi, VP, telco, media and entertainment industry solutions at Google Cloud. 

“Doing so requires modernized infrastructure, built for a cloud-native 5G core, and we’re proud to partner with Nokia to help the telecommunications industry expand and support these customers.”

As part of their strategic collaboration, Nokia will integrate its voice core, cloud packet core, network exposure function, data management, signalling, and 5G core technologies into Google’s services. Nokia will also include its IMPACT IoT Connected Device Platform, which allows for the remote management of IoT devices, and Converged Charging solution for real-time rating and charging capabilities.

Google Cloud’s Anthos will serve as the platform for deploying applications, enabling CSPs to build services across the network edge, carrier networks, and public or private clouds. What’s more, by delivering cloud-native applications at the edge, businesses can lower network latency and eliminate the need for costly, on-site infrastructure. 

Nokia cloud and network services CTO Ron Haberman added, “In the past five years, the telecom industry has evolved from physical appliances to virtual network functions and now cloud-native solutions.

“Nokia is excited to work with Google Cloud in service of our customers, both CSPs and enterprise, to provide choice and freedom to run workloads on premise and in the public cloud. Cloud-native network functions and automation will enable new agility and use-cases in the 5G era.”

Google Meet will help troubleshoot a low-quality video conference


Praharsha Anand

14 Jan, 2021

Google has announced the addition of new troubleshooting tools to its video conferencing solution, Meet.

According to Google, the new tools will make it easier for end-users to understand how their desktop and network environments affect Meet’s video quality. The tools are available by default during a call; users can access them by selecting “Troubleshooting and Help” in the three-dot menu.

Under the “Troubleshooting” section, users can browse real-time charts depicting network stability and CPU load. The network stability graph shows any connection delay in milliseconds, and the system load chart lets users track Google Meet’s CPU usage over the last five minutes. Together, the graphs provide greater visibility into how Google Meet, their computer, and their network are performing. 

The menu also provides users with general suggestions to improve call performance and gives real-time feedback on how any action the user takes affects network and processing load. Plus, it offers tips for performing various tasks, such as presenting content and recording meetings. 

“Meet shares processing power and network connections with all other applications and browser tabs running on a computer. When the system is overusing its processing power or suffering from a bad network connection, Meet will try to adjust and maintain performance while consuming fewer resources. Some of those adjustments are less visible, but if resource shortages are severe or persistent, users may notice blurry video, stuttering audio, or other issues,” explained Google.

Lastly, Meet’s troubleshooting window highlights time segments, enabling users to know when a local environment likely affected the call quality the most. 

Google Meet’s “Troubleshooting” rollout has started for Google Workspace Essentials, Business Starter, Business Standard, Business Plus, Enterprise Essentials, Enterprise Standard, and Enterprise Plus. It’s also available for G Suite Basic, Business, Education, Enterprise for Education, and nonprofit customers. Keep in mind, this is a staged rollout that could take up to 15 days to reach all users. 

HITRUST partners with AWS and Microsoft to clarify shared responsibility in cloud security


Praharsha Anand

13 Jan, 2021

The Health Information Trust Alliance (HITRUST) has announced the release of its new Shared Responsibility Matrix program to help cloud vendors better communicate their security and privacy assurances.

Developed in collaboration with Amazon Web Services (AWS) and Microsoft Azure, HITRUST’s Shared Responsibility Matrices clearly define security and privacy responsibilities between cloud service providers and their customers, streamlining processes for risk management programs.

Furthermore, the HITRUST Shared Responsibility Matrix for AWS and the HITRUST Shared Responsibility Matrix for Microsoft Azure are each tailored to the respective cloud service provider’s unique solution offering.

“Leading cloud service providers have long supported shared responsibility models, whereby the provider assumes some security responsibility for hosting applications and systems, while the organization deploying its solutions in the cloud assumes partial or shared responsibility for others,” said HITRUST. 

“The challenge, however, is that many shared responsibility models are loosely defined and vary based on the solution. For businesses deploying solutions in the cloud, this ambiguity creates an added layer of complexity related to achieving broader risk management objectives.”

HITRUST’s new shared responsibility model for cloud security is a part of HITRUST’s Shared Responsibility and Inheritance Program, which was introduced in 2018 to address the many misunderstandings, risks, and complexities organizations face when engaging with their cloud service providers.

“HITRUST launched this Program with the goal of providing greater clarity regarding the ownership and operation of security controls between organizations and their cloud service providers,” said Becky Swain, director of standards and shared responsibility program lead, HITRUST.

Swain continued, “The introduction of the Shared Responsibility Matrix is another HITRUST resource that underscores our ongoing commitment to simplifying and enhancing offerings to address our customers’ most pressing risk management challenges.”

Lastly, HITRUST announced its information risk management platform MyCSF can now inherit controls from AWS and Microsoft Azure. According to the company, the ability to automatically inherit controls helps save time, money, and resources as organizations pursue their risk management and compliance objectives.

Microsoft will soon offer 99.99% uptime for Azure Active Directory


Praharsha Anand

6 Jan, 2021

Starting April 1, Microsoft plans to update its service level agreement (SLA) for Azure AD user authentication to 99.99%. This four-nines uptime is an improvement over the current 99.9% SLA.

A multi-tenant identity management service, Azure AD processes tens of billions of authentications per day. To deliver on its ‘99.99% uptime’ promise, Microsoft is removing administrative features from the SLA’s scope and covering only vital user authentication and federation features under Azure AD’s new SLA. 

Any period during which users can’t log in to the service, access applications on the Access Panel, or reset passwords counts as service downtime. Furthermore, organisations can claim service credits if Azure AD’s uptime drops below the SLA; for instance, Microsoft offers a full service credit when monthly uptime falls below 95%.
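As a rough, back-of-the-envelope illustration of what the tighter target implies, the short calculation below (assuming a 30-day month) shows the maximum downtime each availability level allows:

```python
# Maximum monthly downtime allowed at each availability level,
# assuming a 30-day month (43,200 minutes).
MINUTES_PER_MONTH = 30 * 24 * 60

for sla in (0.999, 0.9999):
    allowed_downtime = MINUTES_PER_MONTH * (1 - sla)
    print(f"{sla:.2%} uptime -> at most {allowed_downtime:.1f} minutes of downtime per month")

# Output:
# 99.90% uptime -> at most 43.2 minutes of downtime per month
# 99.99% uptime -> at most 4.3 minutes of downtime per month
```

Moving from 99.9% to 99.99% therefore cuts the permissible monthly downtime from roughly 43 minutes to just over four.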

Microsoft attributed the enhanced SLA to its ongoing program of resilience investment to improve reliability in all areas of its identity services. 

To increase the reliability of Azure AD, Microsoft has centralised its architecture to scope and isolate the impact of failures to a minimum number of users; included a backup authentication service that transparently and automatically handles authentications for participating workloads; integrated Azure infrastructure authentication with regional authentication endpoints; and provided instant enforcement of policy changes with the continuous access evaluation (CAE) protocol for critical Microsoft 365 services. 

“In conversations with our customers, we learned that the most critical promise of our service is ensuring that every user can sign in to the apps and services they need without interruption,” said Nadim Abdo, vice president of engineering at Microsoft.

“To deliver on this promise, we are updating the definition of Azure AD SLA availability to include only user authentication and federation (and removing administrative features). This focus on critical user authentication scenarios aligns our engineering investments with the vital functions that must stay healthy for customers’ businesses to run.”

Microsoft to offer top-secret cloud platform for classified data


Praharsha Anand

9 Dec, 2020

Microsoft has announced the launch of its newest cloud offering: Azure Government Top Secret.

The new cloud service expands Microsoft’s cloud portfolio for the US government, which includes Azure (public cloud), Azure Government, and Azure Government Secret. Microsoft has tailored Azure Government Top Secret for its US government customers that work with classified information.

“Azure Government Top Secret provides the same capabilities as the commercial version of Azure, Azure Government and Azure Government Secret to enable a continuum of compute from mission cloud to tactical edge,” said Tom Keane, corporate vice president of Azure global.

“The broad range of services will meet the demand for greater agility in the classified space, including the need to gain deeper insights from data sourced from any location as well as the need to enable the rapid expansion of remote work.”

According to reports, Microsoft is working with the US government to secure accreditation for its new cloud. In the meantime, it has already completed the build-out of the Azure Government Top Secret regions. 

This announcement comes amid ongoing court battles over the Department of Defense’s (DOD) $10 billion Joint Enterprise Defense Infrastructure (JEDI) cloud contract. The DOD awarded the whole contract to Microsoft, bypassing Amazon Web Services and spurring the latter to launch a lawsuit.

Microsoft also announced enhancements to its Azure Government Secret service, authorized and actively used by the US Department of Defense, law enforcement, and other agencies. 

According to Microsoft, Azure Government Secret will now include Azure Kubernetes Service (AKS) and Azure Container Instances. The additions aim to help application developers deploy and manage containerized applications more easily.

Intelligent security analytics services Azure Sentinel and Azure Security Center are also now available in Azure Government Secret, enabling unified security across digital estates and facilitating proactive threat management.

“The consistency between Azure (commercial), Azure Government, and Azure Government Secret is also starting to change the game as software development may happen from anywhere, while the code itself can be promoted to enclaves with higher classification levels. There it can interact with data of higher classification levels. At the end of the day, this means doing more for the mission at a lower overall cost,” said Carroll Moon, CTO of CloudFit Software.