How to run Microsoft Access on a Mac

How to Run Microsoft Access on your Mac: Software developers, data architects and power users who use Macs have expressed a need to run Microsoft Access. If you fall into this category, Parallels Desktop for Mac can help you develop application software without reformatting or rebooting your machine. As seen below: Access 2016 […]

The post How to run Microsoft Access on a Mac appeared first on Parallels Blog.

[session] @Midokura to Present #IoT and #EdgeComputing | @ThingsExpo #AI

There are 66 million network cameras capturing terabytes of data. How did factories in Japan improve physical security at their facilities and boost employee productivity? Edge computing reduces the kilobytes of data collected every second to only a few kilobytes transmitted to the public cloud each day. Data is aggregated and analyzed close to the sensors, so only intelligent results need to be transmitted to the cloud; non-essential data is recycled to optimize storage.
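
To make that data reduction concrete, here is a minimal, illustrative sketch of edge-side aggregation (not Midokura's implementation): per-second readings are summarised locally and only a small digest is sent to the cloud each day. The upload_to_cloud function is a hypothetical placeholder for the uplink.

```python
import json
import statistics

def upload_to_cloud(payload):
    """Hypothetical placeholder for the (infrequent) uplink to the public cloud."""
    print("uploading", len(json.dumps(payload)), "bytes:", payload)

def summarise_day(readings):
    # Aggregate tens of thousands of per-second readings into a few summary statistics.
    return {
        "count": len(readings),
        "mean": statistics.fmean(readings),
        "max": max(readings),
        "min": min(readings),
    }

# Collected at the edge, e.g. motion scores from a camera sampled once per second.
readings = [0.1, 0.4, 0.9, 0.2] * 21600   # roughly one day of per-second samples
upload_to_cloud(summarise_day(readings))  # only a few hundred bytes leave the site
```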

read more

The endpoint and VDI: Moving beyond the last piece of the puzzle

There are mixed feelings about the importance of the endpoint’s role in virtual desktop infrastructure (VDI) deployment.  Companies are finally adopting VDI, and cloud services more generally, in earnest (though on-premises and localised resources remain widely used).  If VDI and cloud services define how resources and applications are delivered, a big question mark still hangs over the future of the endpoint device. What are the drivers and the essential features to have?  Is the endpoint just a means of accessing the VDI service, or should we expect more from it?

Looking at the current landscape of business endpoints, companies are going through a mixed adoption of devices.  Budget limitations, legacy applications, an unclear future and vague requirements mean that decisions lean more on immediate necessity than on a smart long-term strategy.  With no clear strategic drivers, the current landscape is scattered with all sorts of devices: PCs, laptops, tablets, smartphones, thin and zero clients create confusion among users and inefficiency among administrators, and do nothing to help the company’s overall productivity and profitability.

In an era where employees work mostly through technology, it is easy to see how endpoint devices become a vital element influencing business strategy and, ultimately, business goals (improved productivity, profitability and enhanced security).  In this scenario, two categories of employees are critical to consider: users and IT administrators.

On one side, companies need to answer users’ demands for simplicity, ease of use and an improved user experience; on the other, administrators’ requirements for security and lower operational costs have to be met.  From an endpoint perspective, the key is to give users a reliable, user-friendly device, while powerful management and granular monitoring tools are the critical answer for administrators.

The advent of smartphones and touch-based devices has also changed users’ expectations and the way they use their devices.  Although this mainly affects the consumer market, users inevitably bring these expectations to work, forcing IT administrators and manufacturers to come up with solutions that address these needs in the B2B environment.

This surge in domestic computing adds further complexity to the decision-making process, often with the wrong outcome.  The familiarity of the desktop across home and work devices was once seen as a positive; in business, however, the endpoint is a tool for the task at hand, so although it may look like a home computer, its purpose is to deliver line-of-business applications, not home-style entertainment.

The endpoint market is no longer driven only by affordable, solid devices; manufacturers need to invest more in research and innovation to lead the market and respond to ever-changing requirements.  VDI goes a long way towards re-focusing the purpose of the endpoint, but the initial connection to this centralised service cannot be ignored or dismissed.  This is where endpoint management has to assert itself and assist users and administrators alike.

Today, companies potentially have a broad spectrum of endpoint devices, and managing them requires multiple tools: Active Directory (or similar) services, framework management tools or third-party utilities. Unfortunately, rather than aiding and improving the workspace, this often complicates matters and frustrates administrators and users alike.

Thin and zero clients are often touted as the perfect solution for VDI access, and although rare, greenfield VDI projects do exist.  Typically, though, thin clients form part of a bigger estate where a mixture of devices is used for a variety of purposes.  A mixed environment will therefore have old PCs running legacy local applications, laptops for Wi-Fi and roaming connectivity, smartphones and tablets for communication and specific roles, with thin clients used for administration purposes.

Successful thin client implementations have depended on strong management tools, and top manufacturers are investing heavily in innovating and extending the capabilities of their software solutions to address more of the business estate.  We are now entering the era of “workspace management”, where hardware manufacturers are slowly turning into endpoint-agnostic, software-based companies to ease the connection to the VDI infrastructure or the cloud-based applications the business requires.  The goal is therefore to expand the potential of manageability, simplifying the administrator’s life with single-point management solutions and improving the user experience with a seamless interface across all devices.

The rise of repurposing software and Windows shell-replacement solutions is now extending the benefits of thin client management to any device, whilst offering support for legacy applications that cannot be migrated to VDI or the cloud. It is now possible to turn devices such as PCs and laptops into powerful, lightweight “thin clients”, ultimately providing users with the same experience across all endpoints and giving administrators a tool to manage and monitor this diversified workspace.
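
As an illustration of the shell-replacement approach, the sketch below shows one common mechanism on Windows: pointing the Winlogon “Shell” value at a kiosk-style VDI client instead of Explorer. This is an assumption for explanatory purposes, not any vendor’s product; it requires administrative rights, runs only on Windows, and the client path shown is hypothetical.

```python
# Illustrative sketch: repurpose a Windows PC as a VDI "thin client" by replacing
# the default shell (explorer.exe) with a locked-down client launcher.
# Requires administrative rights; the client path below is a hypothetical example.
import winreg

WINLOGON = r"SOFTWARE\Microsoft\Windows NT\CurrentVersion\Winlogon"
VDI_CLIENT = r"C:\Program Files\ExampleVDI\kiosk-client.exe"   # hypothetical path

with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, WINLOGON, 0, winreg.KEY_SET_VALUE) as key:
    # Explorer is replaced by the VDI client, so users boot straight into their virtual desktop.
    winreg.SetValueEx(key, "Shell", 0, winreg.REG_SZ, VDI_CLIENT)
```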

With these integration methods, the strategy on how to access VDI or cloud-based services can now be addressed. With careful evaluation of the choices currently available on the market, cohesive company standards can now be defined for accessing this new virtual environment. This experience should be pervasive, and not restricted by technology or boundaries, providing a positive impact on a company’s bottom line, driving efficiency in operational costs and improving employee productivity.

The emergence of endpoint-agnostic management technologies addresses an often-overlooked aspect of deployment, creating opportunities and at least partially solving business problems. Once adopted, workspace management will become the ignition point for a flexible way to realise the full potential of the endpoint device, beyond just being the last piece of the puzzle.

GE and Siemens to tap into cloud for Manufacturing

Cloud is all-pervasive, and we see its presence in almost every sector today. The many benefits that come from it make it a potent technology even for traditional sectors like manufacturing. Industrial titans GE and Siemens are vying to make the most of cloud to boost their manufacturing processes and products.

One of the core things they are working on is the Internet of Things (IoT). As a refresher, IoT is a technology that connects everyday devices, such as watches, alarm clocks and refrigerators, into a complete digital system. Such a system connects different things to create a smooth flow of data that, in turn, can make life much easier for its users.

For example, your refrigerator can constantly monitor the level of milk, and if it falls below a particular threshold, the system can order milk for you through your smartphone app, so the milk arrives at your home without any effort on your part. That’s the power of technology, and of IoT in particular. Since this is an evolving space, there are a lot of opportunities here, and this is exactly what GE and Siemens want to tap into.
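
A toy sketch of that flow is shown below; the sensor and ordering functions are hypothetical stand-ins for the real smart-fridge and smartphone-app integrations.

```python
MILK_THRESHOLD_LITRES = 0.5

def read_milk_level():
    """Hypothetical fridge sensor reading, in litres."""
    return 0.3

def place_order(item, quantity):
    """Hypothetical call to the grocery service behind the smartphone app."""
    print(f"ordered {quantity} x {item}")

level = read_milk_level()
if level < MILK_THRESHOLD_LITRES:   # below threshold: reorder automatically
    place_order("milk", quantity=2)
```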

While some smaller companies are working on specific products, the aim of these big players is to reinvent the manufacturing process as a whole, so individual firms can tap into each stage of the value chain, from design to production and maintenance. In other words, both companies want to create a cloud-based IoT system that will form the backbone of industrial automation and provide vast amounts of data about everything, from parts and inventories to the performance of different products.

To achieve such a smart backbone, GE and Siemens are looking to create built-in sensors and protocols that enable communication between different pieces of industrial equipment, such as pumps, drones and robots. The key components are the sensors, which monitor the systems and send detailed data back to the companies that own them; using this data, those companies can learn about the health and performance of their machines. Along with sensors, platforms are the key to enabling communication between different devices.
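
For illustration, here is a minimal sketch of the sensor-to-platform pattern described above, using the open-source paho-mqtt client to publish telemetry. The broker address, topic and payload fields are hypothetical; real industrial platforms expose their own ingestion APIs.

```python
import json
import random
import time
import paho.mqtt.client as mqtt

BROKER = "broker.example.com"       # hypothetical ingestion endpoint
TOPIC = "plant7/pump42/telemetry"   # hypothetical topic for one piece of equipment

client = mqtt.Client()  # on paho-mqtt 2.x, pass mqtt.CallbackAPIVersion.VERSION2
client.connect(BROKER, 1883)
client.loop_start()

for _ in range(3):  # a few sample readings; a real sensor would publish continuously
    reading = {
        "timestamp": time.time(),
        "vibration_mm_s": random.uniform(0.5, 2.0),  # stand-in for a vibration sensor
        "bearing_temp_c": random.uniform(40.0, 70.0),
    }
    client.publish(TOPIC, json.dumps(reading), qos=1)
    time.sleep(1)

client.loop_stop()
client.disconnect()
```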

According to research firm Markets&Markets, this market could be worth $150 billion within a short span of three years. The key to the success of this market depends to a large extent on the platform used for the data flow. Microsoft has been an early leader in this respect, having entered into agreements with both GE and Siemens to use its Azure cloud platform.

Besides Azure, Siemens has officially announced six partnerships and says that hundreds more are in the pipeline, all of which suggests industrial automation will soon be a reality. GE, too, has a lead, as its platform is compatible with most other cloud platforms, and dozens of companies are already building their applications on it.

In addition, both companies are bolstering their automation efforts by acquiring digital companies that operate in this sphere. Overall, it’s going to be a tight and interesting race that is sure to benefit everyone in the long run.

The post GE and Siemens to tap into cloud for Manufacturing appeared first on Cloud News Daily.

Why IBM believes quantum computing is the next big cloud hit after AI and blockchain

IBM has released a new API for its Quantum Experience program, which will enable developers to build interfaces between its cloud-based quantum computers and classical computers.
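
For context, here is a minimal sketch of the general pattern of classical code calling a cloud-hosted quantum service over an HTTP API. The endpoint URL, payload fields and response format are illustrative assumptions, not IBM’s actual Quantum Experience interface.

```python
# Illustrative only: URL, token handling and payload fields are assumptions,
# not IBM's actual Quantum Experience API.
import requests

API_URL = "https://quantum.example.com/api/jobs"   # hypothetical endpoint
API_TOKEN = "YOUR_API_TOKEN"

# Classical code submits a (hypothetical) circuit description to the cloud service...
job = {"qubits": 2, "circuit": ["h q[0]", "cx q[0], q[1]", "measure"], "shots": 1024}
resp = requests.post(API_URL, json=job,
                     headers={"Authorization": f"Bearer {API_TOKEN}"})
resp.raise_for_status()

# ...and receives measurement counts to post-process classically.
counts = resp.json().get("counts", {})
print(counts)
```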

According to Gartner’s most recent hype cycle, from August last year, quantum computing – the process of using quantum-mechanical phenomena, such as entanglement, to perform operations – will take more than 10 years to hit the mainstream.

IBM defines it thus. “While technologies that currently run on classical computers, such as Watson, can help find patterns and insights buried in vast amounts of existing data, quantum computers will deliver solutions to important problems where patterns cannot be seen because the data doesn’t exist and the possibilities that you need to explore to get to the answer are too enormous to ever be processed by classical computers,” the company notes.

Use cases of quantum computing could include improving cloud security through the application of quantum physics, greater modelling of financial data, and making machine learning and artificial intelligence more powerful, IBM added.

According to the Armonk giant, quantum computing is the next cab off the rank to be enhanced through cloud-based platforms after machine learning and blockchain, two technologies much further ahead in Gartner’s cycle.

“IBM has invested over decades to growing the field of quantum computing and we are committed to expanding access to quantum systems and their powerful capabilities for the science and business communities,” said Arvind Krishna, senior vice president of hybrid cloud and director for IBM Research in a statement.

“Following Watson and blockchain, we believe that quantum computing will provide the next powerful set of services delivered via the IBM Cloud platform, and promises to be the next major technology that has the potential to drive a new era of innovation across industries,” Krishna added.

IBM’s record of innovation continues to blaze ahead, with more than 8,000 US patents granted last year, well ahead of nearest competitor Samsung, according to figures released in January. Of that number, around a third were related to artificial intelligence, cognitive computing and cloud computing.

You can find out more here.

Picture credit: “Quantum Computer Interior”, by “IBM Research”, used under CC BY ND

Let’s get the network together: Improving lives through AI

We have seen a machine master the complex game of Go, previously thought to be one of the most difficult challenges of artificial processing. We have witnessed vehicles operating autonomously, including a caravan of trucks crossing Europe with only a single operator to monitor systems. We have seen a proliferation of robotic counterparts and automated means for accomplishing a variety of tasks. All of this has given rise to a flurry of people claiming that the AI revolution is already upon us.

Understanding the growth in the functional and technological capability of AI is crucial for understanding the real-world advances we have seen. Full AI, that is to say complete, autonomous sentience, involves a machine being able to mimic a human to the point that it is indistinguishable from one (the so-called Turing test). This type of true AI remains a long way from reality. Some would say the major constraint on the future development of AI is no longer our ability to develop the necessary algorithms but, rather, having the computing power to process the volume of data needed to teach a machine to interpret complicated things like emotional responses. While it may be some time yet before we reach full AI, there will be many more practical applications of basic AI in the near term that hold the potential to significantly enhance our lives.

With basic AI, the processing system, embedded within the appliance (local) or connected to a network (cloud), learns and interprets responses based on “experience.” That experience comes in the form of training on data sets that simulate the situations we want the system to learn from. This is the confluence of machine learning (ML) and AI, and a minimal training sketch follows the list below. The capability to teach machines to interpret data is the key underpinning technology that will enable more complex forms of AI that can respond autonomously to input. It is this type of AI that is getting the most attention. In the next ten years, the use of this kind of ML-based AI will likely fall into two categories:

  • Improvement and automation of daily life: Managing household tasks, self-driving cars and trucks and the general automation of tasks that robots can perform significantly faster and more reliably than humans;
  • Exploration and development of new trends and insights: Artificial intelligence can help accelerate the rate of discovery and science happening worldwide every day. The use of AI to automate science and technology will drive our ability to discover new cures, technologies, tools, cells, planets and more, ultimately pushing artificial intelligence itself to new heights.
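
As promised above, here is a minimal sketch of the “learning from experience” idea, using the open-source scikit-learn library; the simulated data set and decision-tree model are illustrative assumptions, not a description of any specific product.

```python
# A classifier is trained on a simulated data set ("experience") and then
# evaluated on unseen input, mirroring the ML-based AI described above.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Simulated experience: labelled examples of the situations the system should learn.
X, y = make_classification(n_samples=1000, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = DecisionTreeClassifier(max_depth=5)
model.fit(X_train, y_train)                                 # training phase
print("held-out accuracy:", model.score(X_test, y_test))    # interpreting unseen input
```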

There is no doubt about the commercial prospects for autonomous robotic systems for applications like online sales conversion, customer satisfaction, and operational efficiency. We see this application already being advanced to the point that it will become commercially viable, which is the first step to it becoming practical and widespread. Simply put, if revenue can be made from it, it will become self-sustaining and thus continue to grow. The Amazon Echo, a personal assistant, has succeeded as a solidly commercial application of autonomous technology in the United States.

In addition to the automation of transportation and logistics, a wide variety of additional technologies that utilise autonomous processing techniques are being built. Currently, the artificial assistant or “chatbot” concept is one of the most popular. By creating the illusion of a fully sentient remote participant, it makes interaction with technology more approachable. There have been obvious failings of this technology (the unfiltered Microsoft chatbot, “Tay,” being a prime example), but the application of properly developed and managed artificial systems for interaction is an important step along the route to full AI. This is also a hugely important application of AI, as it will bring technology to those who previously could not fully engage with it for any number of physical or mental reasons. By making technology simpler and more human to interact with, you remove some of the barriers to its use that cause difficulty for people with various impairments.

The use of AI for development and discovery is just now beginning to gain traction, but over the next decade, this will become an area of significant investment and development. There are so many repetitive tasks involved in any scientific or research project that using robotic intelligence engines to manage and perfect the more complex and repetitive tasks would greatly increase the speed at which new breakthroughs could be uncovered.

An Overview of DDoS Attacks | @CloudExpo #Cloud #Security #DataCenter

Powerful denial-of-service attacks are becoming increasingly common. A Distributed Denial of Service (DDoS) attack occurs when an attacker uses multiple machines to flood the target’s resources, overwhelming it and denying legitimate users access to the service. The DDoS attack on Dyn in October 2016 was one of the most powerful attacks in history. Many DDoS attacks can be thwarted to a large extent by increasing the system’s capacity during an attack, but that is not a complete solution because it still incurs monetary losses.
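
As one illustration of the per-client throttling that mitigation layers typically add on top of raw capacity, here is a minimal token-bucket rate limiter sketch. It is a generic technique shown for explanation, not something described in the article.

```python
import time

class TokenBucket:
    """Allow `rate` requests per second with bursts up to `capacity`."""
    def __init__(self, rate, capacity):
        self.rate = rate
        self.capacity = capacity
        self.tokens = capacity
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens based on elapsed time, capped at bucket capacity.
        self.tokens = min(self.capacity, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False   # request dropped: this client is over its budget

# One bucket per client IP; excess traffic from any single source is shed.
buckets = {}
def handle_request(client_ip):
    bucket = buckets.setdefault(client_ip, TokenBucket(rate=10, capacity=20))
    return bucket.allow()
```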

read more