Category archive: Guest Post

Cyberoam Provides Critical Insight for Virtual Datacenter Administrators

Guest Post by Natalie Lehrer, a senior contributor for CloudWedge.

Organizations must provide reliable technical resources in order to keep a business running efficiently. Network security is one of the chief concerns of all companies, regardless of size. Although corporations are often pressed to earn profits, the need to protect all company-related data, whatever the cost, should be a top priority.

Virtual datacenters can be susceptible to a variety of threats, including hyperjacking, DoS attacks and more. Keeping up to date on the latest server patches and security bulletins, and staying aware of the latest malware threats, is more important than ever. It is therefore critical that all incoming network traffic is properly scanned for viruses and malicious code that could corrupt the virtual datacenter or cause it to malfunction.

What is the Solution?

Network appliances such as Cyberoam can act as a unified threat management suite. Cyberoam scans all incoming and outgoing traffic while producing detailed reports for system administrators. These granular reports list all virtual datacenter activity and provide logs that give forensic computer scientists direction on where to focus their investigations. Since any activity performed on virtual servers can be retained using Cyberoam, the audit process can provide a clear trail that leads you to the culprit in case of a data breach. Cyberoam is not merely a reactive solution: it proactively scans all incoming and outgoing data in case viruses and other harmful programs try to compromise and corrupt your virtual datacenter.

Security intricacies include intrusion prevention services, specialized auditing applications and robust firewall features. Firewalls play an important role in keeping harmful material from compromising virtual servers: they block intruders while simultaneously allowing legitimate TCP or UDP packets to enter your system. Cyberoam gives administrators the ability to easily construct firewall rules that keep internal data safe and secure.
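
To make the allow/deny idea above concrete, here is a minimal, hypothetical sketch in Python of how such rule evaluation works in principle. It is not Cyberoam's rule syntax or configuration format, and the ports listed are illustrative assumptions; real appliances add state tracking, source/destination matching and logging on top of this basic idea.

    # Conceptual sketch only -- this is not Cyberoam's rule syntax, just an
    # illustration of evaluating TCP/UDP traffic against an allow/deny list.
    from dataclasses import dataclass

    @dataclass
    class Rule:
        protocol: str   # "tcp" or "udp"
        port: int       # destination port
        action: str     # "allow" or "deny"

    # Hypothetical rule set: permit web and DNS traffic into the datacenter.
    RULES = [
        Rule("tcp", 443, "allow"),   # HTTPS
        Rule("tcp", 80,  "allow"),   # HTTP
        Rule("udp", 53,  "allow"),   # DNS
    ]

    def evaluate(protocol: str, port: int) -> str:
        """Return the action for an incoming packet; default-deny if nothing matches."""
        for rule in RULES:
            if rule.protocol == protocol and rule.port == port:
                return rule.action
        return "deny"

    print(evaluate("tcp", 443))   # allow
    print(evaluate("tcp", 3389))  # deny: RDP is not explicitly permitted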

When you set up your virtual datacenter, it is important to utilize all of the features at your disposal. Sometimes the most obscure features are the most valuable. The best way to keep your virtual datacenter safe is to stay on top of the latest knowledge. There have been reports that many IT professionals who find themselves intimidated by new technology simply have not taken the initiative to learn about the latest datacenter hardware and software available to them today. If you are trying to stay one step ahead of the game, your best bet is to learn all about the tools on the market and make your decision accordingly. Be sure to scrutinize any appliance you decide to utilize inside your datacenter before adding it to your arsenal of IT weaponry.


Natalie Lehrer is a senior contributor for CloudWedge.

In her spare time, Natalie enjoys exploring all things cloud and is a music enthusiast.

Follow Natalie’s daily posts on Twitter: @Cloudwedge, or on Facebook.

Think Office 365 is a Maintenance-Free Environment? Not So Fast …

Guest Post by Chris Pyle, Champion Solutions Group

So you’ve made the move to Office 365. Great!

You think you've gone from worrying about procuring Exchange hardware and storage capacity, being concerned about email recovery plans, and having to keep up with the constant maintenance of your Exchange server farm and backing up your data, to relying on Office 365 to provide virtually anywhere-access to Microsoft tools.

Sounds pretty good, and we won’t blame you if you’re thinking that your move to the cloud has just afforded you a maintenance-free environment, but not so fast.

While the cost savings and convenience may make it seem like a no-brainer, what many administrators often forget is that the cloud itself doesn't make email management any easier – there are still a ton of tasks that need to be done to ensure usability and security.

Indeed while moving mailboxes to the cloud may be efficient and provide cost savings, it doesn’t mean administration ends there. Not by any means.

Not to worry: for starters, Office 365 admins looking for a faster and easier way to handle mail administration tasks have a number of tools at their disposal, such as our 365 Command by MessageOps. 365 Command replaces the command line interface of Windows® PowerShell with a rich, HTML5 graphical user interface that is easy to navigate and makes quick work of changing mailbox settings, monitoring usage and reporting (and did we say you don't need to know PowerShell?).

From our users, who manage about 1 million mailboxes, we see that the most effective Office 365 administrators break maintenance and tasks down into daily, weekly, monthly, and quarterly buckets. Breaking down tasks this way simplifies workflow, and the best part is that it can be easily built into your routine and should heighten the value and success you get from Office 365.

Here are best practices for getting started:

Daily: Mailbox administrators are constantly responding to addition, change, and removal requests for their Office 365 accounts. The most common are daily tasks that are quickly resolved, for example "forgot my password", "need access to folder X", "executive Y is on maternity leave, can you forward her files", and so on:

  1. Modifying Passwords

  2. Modifying Folder Permissions

  3. Mailbox Forwarding

  4. Creating Single and Shared Mailboxes

Weekly: Weekly task groupings are geared toward helping administrators keep a watchful eye on growth and scalability, security, speed and access. For example, checking for new devices that have been added to mailboxes, comparing them with previous weeks, and verifying that the user did indeed add the new device and that it does not represent a potential risk of theft or fraud:

  1. Review Top Mailbox Growth by Size

  2. Review Office 365 Audit Logs

  3. Review Mobile Security

  4. Review Shared Mailbox Growth (shared mailboxes have only a 10 GB limit!)

  5. Review the exact location of their servers and their mailboxes within the Microsoft data centers

Monthly: OK, now you're cooking with gasoline. With those annoying daily tasks and cumbersome weekly tasks out of the way, top-level administrators turn their full attention to security and access, areas where we can never afford a lapse in attention:

  1. They run reports listing all users' last login dates, checking for people who may no longer be employed with the company and thus eliminating the need for that mailbox and its associated cost from Microsoft. Or, if there is limited use, they could move the end user to a less expensive Office 365 SKU, again reducing their overall O365 costs. (A scripted example of this kind of report follows this list.)

  2. From a security standpoint, they run reports to see who is forwarding their mailbox to an external mailbox, such as sending their email to a home email account (Gmail, Yahoo, Hotmail, etc.).

  3. Review password strength and the passwords that are set to expire on a monthly basis, ensuring their mailboxes are safe and secure.

  4. Review mailbox permissions, and review who has Send As privileges in their organization. They are confirming with the end user that they allowed these people to have the ability to send email as them.

  5. Review which employees have Full Mailbox access privileges. They confirm with the end user that they do want those additional users to have full access to their mail and calendar.
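
As promised in item 1, here is one way to script the last-login report if you prefer code to a console. This is a hedged sketch, not part of 365 Command or any Microsoft tooling: it assumes you can obtain a Microsoft Graph access token with User.Read.All and AuditLog.Read.All permissions, and that your tenant exposes the signInActivity property (which requires an Azure AD Premium license). The forwarding check in item 2 is typically handled separately with Exchange Online tools such as the Get-Mailbox cmdlet rather than Graph.

    # Hypothetical sketch: list each user's last sign-in via Microsoft Graph so
    # unused (but still billed) Office 365 mailboxes stand out. Assumes an OAuth
    # token obtained out of band (e.g. via MSAL) is available in the environment.
    import os
    import requests

    GRAPH = "https://graph.microsoft.com/v1.0"
    TOKEN = os.environ["GRAPH_ACCESS_TOKEN"]  # assumption: acquired elsewhere

    def users_with_last_sign_in():
        """Yield (displayName, userPrincipalName, lastSignInDateTime) for all users."""
        url = f"{GRAPH}/users?$select=displayName,userPrincipalName,signInActivity"
        headers = {"Authorization": f"Bearer {TOKEN}"}
        while url:
            resp = requests.get(url, headers=headers, timeout=30)
            resp.raise_for_status()
            payload = resp.json()
            for user in payload.get("value", []):
                activity = user.get("signInActivity") or {}
                yield (
                    user.get("displayName"),
                    user.get("userPrincipalName"),
                    activity.get("lastSignInDateTime"),  # None if never signed in
                )
            url = payload.get("@odata.nextLink")  # follow Graph's paging links

    for name, upn, last_seen in users_with_last_sign_in():
        print(f"{upn:40} last sign-in: {last_seen or 'never'}")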

Quarterly: See how easy this is now? You’ve cleared out the clutter, and made sure every box on the system is secure. You’ve taken the steps to keep the system running fast and true, with consistent access and performance across the enterprise. Now kick back, light a fat stogie and do some light clean up and maintenance:

  1. Group Clean Up: review all email groups to ensure they have active members, identify groups containing people who are no longer employed or contractors who are no longer involved, note which groups aren't being utilized, etc.

  2. Review the Edit Permissions list.

  3. Review non-password changes made in the last 90 days.

Conclusion

Just because you've moved to the cloud doesn't mean the management and maintenance of your mailboxes stops there. Many of these best practices would require knowledge of PowerShell, but who wants to deal with that? Save yourself a lot of trouble and find a tool that will manage these activities, streamline your workflow and jump-start your productivity.


Christopher Pyle is President & CEO for Champion Solutions Group. He is also an active member of Vistage International, an executive leadership organization, and is a Distinguished Guest Lecturer at Florida Atlantic University’s Executive Forum Lecture Series.

Google, Amazon Outages a Real Threat For Those Who Rely On Cloud Storage

Guest Post by Simon Bain, CEO of SearchYourCloud.

It was only for a few minutes, but Google was down. This follows hot on the heels of the other major cloud provider, Amazon, being down for a couple of hours earlier in August. Even this relatively short outage could be a real problem for organizations that rely on these services to store their enterprise information. I am not a great lover of multi-device synchronization; I mean, all those versions kicking around your systems! However, if done well, it could be one of the technologies that help save 'Cloud Stores' from the idiosyncrasies of the Internet and a connected life.

We seem to be in the silly season of outages, with Amazon, Microsoft and Google all stating that their problems were caused by a switch being replaced or an update going wrong.

These outages may seem small for the supplier. But they are massive for the customer, who is unable to access sales data or invoices for a few hours.

This, however, should not stop people from using these services. But it should make them shop around and look at what is really on offer. A service that does not have synchronization may well sound great, but if you do not have a local copy of your document on the device you are actually working on and your connection goes down, for whatever reason, then your work will stop.

SearchYourCloud Inc. has recently launched SearchYourCloud, a new application that enables people to securely find and access information stored in Dropbox, Box, GDrive, Microsoft Exchange, SharePoint or Outlook.com with a single search. It works on Windows PCs and iOS devices, and will also be available for other clouds later in the year.

SearchYourCloud not only enables users to find what they are searching for, but also protects their data and privacy in the cloud.


Simon Bain is Chief Architect and CEO of SearchYourCloud, and also serves on the Board of the Sun Microsystems User Group.

OpenStack’s Third Birthday – a Recap with a Look into the Future

Guest Post By Nati Shalom, CTO and Founder of GigaSpaces

OpenStack was first announced three years ago at the OSCON conference in Portland. I remember the first time I heard about the announcement and how it immediately caught my attention. Since that day, I have been a strong advocate of the technology. Looking back, I thought that it would be interesting to analyze why.

Is it the fact that it’s an open source cloud? Well partially, but that couldn’t be the main reason. OpenStack was not the first open source cloud initiative; we had Eucalyptus, then later Cloud.com and other open source cloud initiatives before OpenStack emerged.

There were two main elements missing from these previous open source cloud initiatives: the companies behind the initiatives and the commitment to a true open movement. It was clear to me that a true open source cloud movement could not turn into an industry movement, and thus meet its true potential, if it was led by startups. In addition, the fact that a company whose business runs cloud services, such as Rackspace, brought its own experience in the field, and that a large-scale consumer of such infrastructure, such as NASA, was on board, gave OpenStack a much better starting point. Also, knowing some of the main individuals behind the initiative and their commitment to the Open Cloud made me feel much more confident that the OpenStack project would have a much higher chance of success than its predecessors. Indeed, after three years, it is now clear that the game is essentially over and it is apparent who is going to win the open source cloud war. I'm happy to say that I also had my own little share in spreading the word by advocating the OpenStack movement in our own local community, which has also grown extremely quickly over the past two years.

OpenStack as an Open Movement

Paul Holland, an Executive Program Manager for Cloud at HP, gave an excellent talk during the last OpenStack Summit, comparing the founding of the OpenStack Foundation to the establishment of the United States. Paul drew interesting parallels between the factors that brought a group of thirteen individual states to unite and become the empire of today, and the factors at work in OpenStack.


Paul also drew an interesting comparison between the role of a common currency in fostering an open market and trade between the different states and its OpenStack equivalents: APIs, a common language, processes, etc. Today, we take those things for granted, but the reality is that a common currency still isn't a given in many countries, yet we cannot imagine what our global economy would look like without the dollar as a common currency or English as a common language, even though they have not been explicitly chosen as such by all countries.


As individuals, we often tend to gloss over the details of the Foundation and its governing body, but it is those details that make OpenStack an industry movement that has brought many large companies, such as Red Hat, HP, IBM, Rackspace and many others (57 in total as of today), to collaborate and contribute to a common project as noted in this report. Also, the fact that the number of individual developers has been growing steadily year after year is another strong indication of the real movement that this project has created.


Thinking Beyond Amazon AWS

OpenStack essentially started as the open source alternative to Amazon AWS, and many of its sub-projects began as Amazon equivalents. Today, we are starting to see projects with a new level of innovation that have no AWS equivalent. The most notable ones, IMHO, are the Neutron (network) and BareMetal projects. Both have huge potential to disrupt how we think about cloud infrastructure.

Only on OpenStack

We often tend to compare OpenStack with other clouds on a feature-by-feature basis.

However, the open source nature of OpenStack and its community adoption enable us to do things that are unique to OpenStack and cannot be matched by other clouds. Here are a few examples:

  • Run the same infrastructure on private and public clouds.
  • Work with multiple cloud providers; there is more than one OpenStack-compatible cloud provider with which to work.
  • Plug in different hardware as cloud platforms for private clouds from different vendors, such as HP, IBM, Dell or Cisco, or use pre-packaged OpenStack distributions, such as the ones from Ubuntu, Red Hat, Piston, etc.
  • Choose your preferred infrastructure for storage, networking, etc., since many of the devices come with OpenStack-supported plug-ins.

All this can be done only on OpenStack; not just because it is open source, but primarily because of the level of adoption that has made OpenStack the de facto industry standard.
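
To illustrate that portability in practice, the sketch below uses the openstacksdk Python library to boot the same server definition on two different OpenStack clouds; only the clouds.yaml entry name changes. The cloud names, image, flavor and network names are assumptions made for the example, not anything prescribed by OpenStack itself.

    # Minimal portability sketch with the openstacksdk library: the same code
    # provisions an identical server on any OpenStack cloud, public or private.
    import openstack

    def boot_server(cloud_name: str, server_name: str):
        """Boot one instance on whichever OpenStack cloud clouds.yaml calls cloud_name."""
        conn = openstack.connect(cloud=cloud_name)        # reads clouds.yaml
        image = conn.compute.find_image("ubuntu")         # assumed image name
        flavor = conn.compute.find_flavor("m1.small")     # assumed flavor name
        network = conn.network.find_network("private")    # assumed network name
        server = conn.compute.create_server(
            name=server_name,
            image_id=image.id,
            flavor_id=flavor.id,
            networks=[{"uuid": network.id}],
        )
        return conn.compute.wait_for_server(server)

    # Same call, two different OpenStack-compatible providers.
    boot_server("private-cloud", "app-01")
    boot_server("public-provider", "app-01")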

Re-think the Cloud Layers

When cloud first came into the world, it was common to look at the stack from a three-layer approach: IaaS, PaaS and SaaS.

Typically, when we designed each of the layers, we looked at the other layers as black boxes and often had to create parallel stacks within each layer to manage security, metering, high availability, etc.

The fact that OpenStack is an open source infrastructure allows us to break the wall between those layers and re-think where we draw the lines. For example, when we design our PaaS on OpenStack, there is no reason why we wouldn't reuse the same security, metering, messaging and provisioning that are used to manage our infrastructure. The result is a much thinner and potentially more efficient foundation across all the layers that is easier to maintain. The new Heat project and Ceilometer are already taking steps in this direction and are, therefore, becoming some of the most active projects in the upcoming Havana release of OpenStack.

Looking Into the Future

Personally, I think that a world with OpenStack is by far healthier and brighter for the entire industry than a world in which we are dependent on one or two major cloud providers, regardless of how good a job they may or may not do. There are still many challenges ahead in turning all this into a reality, and we are still at the beginning. The good news, though, is that there is a lot of room for contribution and, as I've witnessed myself, everyone can help shape this new world that we are creating.

OpenStack Birthday Events

To mark OpenStack's 3rd birthday, there will be a variety of celebrations taking place around the world. At the upcoming OSCON event in Portland from July 22-26, OpenStack will host its official birthday party on July 24th. There will also be a celebration marking the occasion in Tel Aviv, Israel, on the 21st.

For more information about the Foundation’s birthday celebrations, visit their website at www.openstack.org.


Nati Shalom is the CTO and founder of GigaSpaces and founder of the Israeli cloud.org consortium.

 

Seeking Better IT Mileage? Take a Hybrid Out for a Spin

Guest Post by Adam Weissmuller, Director of Cloud Solutions at Internap

As IT pros aim to make the most efficient use of their budgets, there is a rapidly increasing range of infrastructure options at their disposal. Gartner’s prediction that public cloud spending in North America will increase from $2 billion in 2011 to $14 billion in 2016, and 451 Research’s expectation that colocation demand will outpace supply in most of the top 10 North American markets through 2014 are just two examples of the growing need for all types of outsourced IT infrastructure.

While public cloud services in particular have exploded in popularity, especially for organizations without the resources to operate their own data centers, a “one size fits all” myth has also emerged, suggesting that this is the most efficient and cost-effective option for all scenarios.

In reality, the public cloud may be the sexy new sports car – coveted for its horsepower and handling – but sometimes a hybrid model can be the more sensible approach, burning less gas and still getting you where you need to go.  It all depends on what kind of trip you’re taking. Or, put in data center terminology, the most effective approach depends on the type of application or workload and is often a combination of infrastructure services – ranging from public, private and “bare metal” cloud to colocation and managed hosting, as well as in-house IT resources.

The myth of cloud fuel economy
Looking deeper into the myth of "cloud costs," Internap, as part of a recent "Data Center Services Landscape" report, surveyed 100 IT decision makers to gain a cross-sectional view into their current and future use of IT infrastructure. Almost 65 percent of respondents said they are considering public cloud services, and 41 percent reported they are doing so in order to reduce costs.

But when you compare the “all-in” costs of operating thousands of servers over several years in a well-run corporate data center or colocating in a multi-tenant data center against the cost of attaining that same capacity on a pay-as-you-go basis via public cloud, the cloud service will lose out nearly every time.

The fact that colocation can be more cost-efficient than cloud often comes as a surprise to organizations and is something of a dirty little secret within the IaaS industry. But for predictable workloads and core infrastructure that is “always on,” the public cloud is a more expensive option because the customer ultimately pays a premium for pay-as-you-go pricing and scalable capacity that they don’t need – similar to driving a gas-guzzling truck even when there’s nothing you need to tow.

Balancing the racecar handling of cloud with the safety of a family hybrid
This is not to suggest that cloud is without its benefits. Public cloud makes a lot of sense for unpredictable workloads. Enterprises can leverage it to expand capacity on-demand without incurring capital expenditures on new servers. Workloads with variable demand and significant traffic peaks and valleys, such as holiday shopping spikes for online retailers or a software publisher rolling out a new product, are generally well-suited for public clouds because the customer doesn’t pay for compute capacity that they don’t need or use on a consistent basis.

One of the biggest benefits of cloud services is agility. This is where the cloud truly shines, providing accessibility and immediacy to the end-user.  However, the need for a hybrid approach also arises here, when agility comes at the expense of security and control. For example, the agility vs. control challenge is often played out in some version of the following use case:  A CIO becomes upset when she finds out that employees within most of the company’s business units are leveraging public cloud services – without her knowledge. This is especially unsettling, given that she has just spent millions of dollars building two new corporate data centers that were only half full. Something has gone wrong here, and it’s related to agility.

A major contributing factor to the surprise popularity of public cloud services is the perceived lack of agility of internal IT organizations. For example, it's not uncommon for IT executives to take quite some time to turn up new servers in corporate data centers. And this isn't necessarily the fault of IT, since a number of factors can, and often do, present roadblocks, such as the need to seek budgetary approval, place orders, get various sign-offs, install the servers, and finally release the infrastructure to the appropriate business units – a process that can easily take several months. As a result, employees and business units often begin to side-step IT altogether and go straight to public cloud providers, corporate credit card in hand, in an effort to quickly address IT issues. The emergence of popular cloud-based applications has made this scenario a common occurrence, and it illustrates perfectly how the promise of agility can end up pulling business units toward the public cloud – at the risk of corporate security.

The CIO is then left scrambling to regain control, with users having bypassed many important processes that the enterprise spent years implementing. Unlike internal data centers or colocation environments, with a public cloud, enterprises have little to no insight into the servers, switches, and storage environment.

So while agility is clearly a big win for the cloud, security and control issues can complicate matters. Again, a hybrid, workload-centric approach can make sense. Use the cloud for workloads that aren’t high security, and consider the economics of the workload in the decision, too. Some hybrid cloud solutions even allow enterprises to reap the agility benefits of the cloud in their own data center – essentially an on-premise private cloud.

As businesses continue to evolve, it will be critical to go beyond the industry’s cloud hype and instead build flexible, centrally-managed architectures that take a workload-centric approach and apply the best infrastructure environment to the job at hand. Enterprises will find such a hybrid solution is usually of greater value than the sum of its individual parts.

Carpooling with “cloudy colo”
One area that has historically been left out of the hybridization picture is colocation. While organizations can already access hybridized public and private and even “bare metal” cloud services today, colocation has always existed in a siloed environment, without the same levels of visibility, automation and integration with other infrastructure that are often found in cloud and hosting services.

But these characteristics are likely to impact the way colocation services are managed and delivered in the future. Internap’s survey found strong interest in “cloudy colo” – colocation with cloud-like monitoring and management capabilities that provides remote visibility into the colocation environment and seamless hybridization with cloud and other infrastructure, such as dedicated and managed hosting.

Specifically, a majority of respondents (57 percent) cited interest in hybrid IT environments; and, combined with 72 percent of respondents expressing interest in hybridizing their colocation environment with other IT infrastructure services via an online portal, the results show strong emerging interest in data center environments that can support hybrid use cases as well as unified monitoring and management via a “single pane of glass.”

Driving toward a flexible future
A truly hybrid architecture – one that incorporates a full range of infrastructure types, from public and private cloud to dedicated and managed hosting, and even colocation – will provide organizations with valuable, holistic insight and streamlined monitoring and management of all of their resources within the data center, as well as consolidated billing.

For example, through a single dashboard, organizations could perform tasks, such as: remotely manage bandwidth, inventory, and power utilization for their colocation environment; rapidly move a maturing application from dedicated hosting to colocation; turn cloud services up and down as needed or move a cloud-based workload to custom hosting. Think of it as your hybrid’s in-car navigation system with touchscreen controls for everything from radio to air conditioning to your rear view camera.

The growing awareness of the potential benefits of hybridizing IT infrastructure services reflects the onset of a shift in how cloud, hosting and even colocation will be delivered in the future. The cloud model, with its self-service features, is one of the key drivers for this change, spurring interest among organizations in maximizing visibility and efficiency of their entire data center infrastructure ecosystem.


Adam Weissmuller is the Director of Cloud Solutions at Internap, where he led the recent launch of the Internap cloud solution suite. A 10-year veteran of the hosting industry, he recently presented on “Overcoming Latency: The Achilles Heel of Cloud Computing” at Cloud Expo West.

The Future of Tech Companies, the NSA, and Your Information

Guest Post by Lewis Jacobs

Verizon and the NSA

Last week, the technology world was turned upside down when the Guardian broke the news that the National Security Agency had directed telecommunications company Verizon to release customer call records and metadata on an “ongoing daily basis.”

Though the metadata doesn’t include the audio content of calls, it does include the phone numbers on both ends of calls, the devices and location of both parties involved, and the time and duration of calls.

The order was leaked by Edward Snowden, an analyst for defense contractor Booz Allen Hamilton at the NSA. The order targets both international and domestic calls, and it does not contain parameters for who can see the data or whether or not the data will be destroyed after NSA use.

Though the White House and the NSA say that the data will only be used for counter-terrorism efforts and other national security measures, the order nonetheless gives the federal government access to data from all of Verizon’s more than 100 million customers.

Since the story broke, there has been significant debate over whether the NSA is working within the regulations of the First and Fourth Amendments or whether it is violating citizens’ rights to free speech and privacy. The White House has defended the order as a necessary measure for national security. But critics, including the American Civil Liberties Union and several U.S. lawmakers, disagree.

What it means for the future

The controversy raises the question of whether or not other technology and telecommunications companies will be required to follow suit—or whether they already have. Amy Davidson at the New Yorker speculates that the leaked Verizon order is “simply one of a type—the one that fell off the truck.” Adam Banner at the Huffington Post wonders, “How many other ‘top secret’ court orders are currently in action with countless other information providers?”

The NSA is said to have been monitoring and collecting customer data from some of the world’s largest technology companies with the help of surveillance program PRISM. But many companies, including Google, Facebook, Microsoft, Yahoo, Apple, and AOL, have denied providing the government direct access to their users’ information. Google, one of the companies to deny any knowledge of PRISM, wrote an open letter to the Attorney General and the FBI requesting to make public any federal requests for data.

In any case, it’s unlikely that the NSA demanded customer information only from Verizon, meaning that the federal government could be (and probably is) accessing information about citizens through their phone providers, their email services, and their search engines. Faced with federal orders, there’s not much that technology companies can do in opposition.

The future of NSA technology surveillance will depend, of course, on its legality, which is yet to be determined. It's unclear whether or not the NSA's actions fall under the provisions of the Patriot Act, the FISA Amendments Act, the Constitution, and the federal government's system of checks and balances.

The American Civil Liberties Union recently announced their plan to sue the White House Administration for violating the privacy rights of Americans. On the other side, whistleblower Edward Snowden is currently under investigation for the disclosure of classified information, an offense that could result in life in prison.

This article was submitted by Lewis Jacobs, an avid blogger and tech enthusiast. He enjoys fixing computers and writing about internet trends. Currently he is writing about an "internet in my area" campaign for local internet providers.

Sources:

http://www.newyorker.com/online/blogs/closeread/2013/06/the-nsa-verizon-scandal.html

http://www.huffingtonpost.com/adam-banner/the-nsa-and-verizon

http://money.cnn.com/2013/06/11/technology/security/google-government-data/

http://money.cnn.com/2013/06/07/technology/security/nsa-data-prism/

http://www.washingtonpost.com/blogs/wonkblog/wp/2013/06/06/everything-you-need-to-know-about-the-nsa-scandal/

Mobile Payment Future Is Tied to Services

Guest Post by Nick Nayfack, Director of Payment Solutions, Mercury Payment Systems

Consumers are already using their smartphones when they shop; they just need an incentive to take the next step of making a purchase with their phone. According to Google, some 79 percent of consumers today can be considered "mobile shoppers" because they use their smartphones to browse for product information, search for product reviews or look for offers and promotions. Today's merchants see their customers browsing their stores with smartphones and know that mobile marketing is no longer an option; it's an imperative.

There is a clear opportunity to target avid smartphone users, as well as to give merchants the ability to turn their point of sale system into a marketing engine simply by capturing their customers' phone numbers. By creating a point of sale environment where processing becomes prospecting, mobile and alternative payments become a natural extension of the convenience and value that merchants and consumers are looking for. Not only can consumers use their phones in store to gain product information or exclusive offers, they can also skip the checkout line by paying with their phone. In this environment, mobile payments gain adoption because of the valuable service they provide to both the merchant and the consumer.

What is driving merchants to adopt mobile point of sale (POS) systems – doubling their implementation in the past year – and consumers to rapidly adopt smartphones, while mobile payments have yet to experience the same growth curve? The slow speed of adoption can be tied to two gaps in the current payment landscape: convenience and value. Merchants are adopting mobile POS systems because of their affordable pricing, their ease of use, and the ability to tie value-added services like loyalty programs and gift options to their customers' checkout experience. Consumers are looking for more value for their money and are more likely to sign up for opt-in marketing at the cash register or loyalty programs if they feel they are getting something in return.

Where is the value in Mobile Payments today?

1. Information is Still Key

Consumers are using their phones now mostly to find product information, restaurant reviews, and discount offers. 90 percent of smartphone shoppers use their phone today for "pre-shopping" activities. The most common are price comparisons (53 percent), finding offers and promotions (39 percent), finding locations of other stores (36 percent) and finding hours (35 percent). In contrast, consumer in-store purchases from a mobile device are still in the minority (~16 percent), but show promise for fast and exponential growth. As such, if you want consumers to use your mobile payment application, there must be tight alignment with other frequently used mobile applications (i.e., mobile search).

2. Remember Your Basics

Key players in the mobile payments space need to build better UX by applying principles learned from the web many years ago: mobile-specific design, clear calls to action and one shopping experience across all platforms. Beyond the UX, there needs to be clear and repeatable value to the consumer. Special offers or incentives could be paired with the customer's current purchase history to make one-click purchases attractive on mobile devices. From a historical perspective, Amazon introduced this concept several years ago in the e-commerce world with links that suggested purchases based on the buyer's current purchase (e.g., others who bought this book also bought the following). While m-commerce has different considerations, such as limited time and highly distracted users, there are lessons to be learned from the past.
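
As a toy illustration of the "others who bought this also bought" idea mentioned above (and not any particular vendor's algorithm), the sketch below counts item co-occurrences across past orders; the order data is made up for the example.

    # Hypothetical co-occurrence recommender: rank items that most often appear
    # in the same order as a given SKU. Sample data is invented for illustration.
    from collections import Counter
    from itertools import combinations

    orders = [
        {"book-123", "lamp-9"},
        {"book-123", "mug-7"},
        {"book-123", "mug-7", "pen-2"},
    ]

    def also_bought(sku, top_n=3):
        """Return the items most frequently purchased together with sku."""
        counts = Counter()
        for order in orders:
            if sku not in order:
                continue
            for a, b in combinations(sorted(order), 2):
                other = b if a == sku else a if b == sku else None
                if other:
                    counts[other] += 1
        return counts.most_common(top_n)

    print(also_bought("book-123"))  # e.g. [('mug-7', 2), ('lamp-9', 1), ('pen-2', 1)]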

3. Find Today’s Value

POS developers will succeed today and in the future by helping merchants obtain and analyze information about their business and customers. This requires coordinating with an acquirer or processor that has rich historical data to help analyze transaction history and other data. In this way, merchants can then personalize the consumer experience for new cost benefits or improve operations for cost savings.

Lastly, as mobile evolves, new data points will provide richer context (e.g., location, social context, SKU data) and merchants will have even more reference points for delivering a personal consumer experience. In this way, personalization is the key value, coupled with convenience.


Nick Nayfack is the director of product for Mercury Payment Systems. He is responsible for developing best practices in mobile commerce with industry peers in order to help merchants and consumers navigate technological "ease-of-use." Nick is also a member of the Electronic Transactions Association's (ETA) mobile payments committee.

The Rise in Popularity of Hybrid Cloud Infrastructure

Guest Post by Paul Vian of Internap

Organizations are increasingly choosing to outsource business-critical applications and content to third-party providers. But with it comes a long list of questions to determine the right mix of IT infrastructure services to meet specific scalability, control, performance and cost requirements. Although a shared public cloud can offer the convenience of easily scaling infrastructure up and down on demand, many organizations are still hesitant due to concerns around privacy and security within a shared tenancy arrangement. Another complication is that the virtualization layer typically consumes around 10 per cent of the resources. Accordingly, dedicated, physical infrastructure is often ideal purely for performance purposes.

Which cloud environments are businesses considering?

If a business has a fluctuating workload that has ever-changing demands and requires more resources in the short term, a cloud environment is often still the preferred choice, but this does tend to become more expensive for applications that are always on, such as databases or other highly resource-intensive applications. The reality is that organizations often require something in between, which is driving demand for flexible, hybrid cloud infrastructure solutions that can easily adapt and scale to meet a wide range of use cases and application needs.

What are the benefits of a hybrid cloud infrastructure?

Taking a tailored approach can enable businesses to scale their infrastructures ‘on demand’. It is also now possible for companies utilising physical servers to gain the flexibility and benefits they have been enjoying within a highly virtualized cloud environment in recent years. We are in an age where physical servers can be instantly spun up or down as global demand dictates, so there is no reason why organizations can’t gain the convenience and agility of the cloud with the reliability and performance of physical servers.

How can companies achieve a hybrid cloud infrastructure tailored to their specific needs?

Ideally, companies should look to work with a third-party provider that can provide access to a broader mix of services to meet these emerging demands around scalability in particular. Through working with a provider that takes a consultative sales approach, businesses can benefit from a tailored service that allows them to seamlessly mix, provision and manage both their virtual cloud and physical IT infrastructure – whether this is legacy hardware or back-up equipment. With this approach, businesses can not only meet their diverse application requirements, but also easily address changing global business needs.
We are now seeing things come full circle: from physical networks, through virtualization and the cloud, to today's move towards a hybrid approach. This is in response to the ever-growing sophistication of automation and self-service capabilities, and is the way forward for any forward-thinking organization with a complex list of requirements.


Paul Vian is Internap's director of business development for Europe, the Middle East, and Africa.

New Features, Bells and Whistles: Google I/O Conference

Guest Post by Paul Williams, a copywriter with InternetProviders.com

The Google I/O 2013 conference started with a bang on May 15th. Developers, tech journalists and venture capitalists crowded the Moscone Center in San Francisco, where CEO Larry Page and VP Amit Singhal delivered masterful keynotes that set the tone for the rest of the event.

Although Google I/O events are mostly for developers, the conference thus far has produced many interesting items for users to dissect and marvel at. In fact, the buzz surrounding the I/O conference has mostly been focused on developments and new features that will soon be ready to enhance the Google user experience. The major announcements are related to maps, music, finances, pictures, education, games, social networking, and search.

Providing Instant Answers with Conversation and Learning

Google is leaning on its Knowledge Graph to deliver a rich search experience that draws from a massive relational database storing 570 million entries. According to Amit Singhal, Knowledge Graph will progressively learn from the queries entered by hundreds of millions of users. To this end, a film enthusiast searching for information about director Kathryn Bigelow will instantly see highlights from her filmography, biographical data, reviews of Zero Dark Thirty, discussions about the possible remake of Point Break, and even more nuggets of information right on Google's search engine results page (SERP).

Google is moving beyond the traditional keyboard-mouse-screen input methods of Internet search. “OK Google” is the new approach to conversational search. In this regard, Google’s plans for voice search have already impressed users and developers alike with an interface that will surely rival Apple’s Siri. The Google Now voice-activated personal assistant is also becoming smarter with reminders, recommendations and alerts that conform to each user’s search history and preferences.

Mapping and Finance

A revamped Google Maps for mobile devices will serve as a full-fledged handheld or in-vehicle navigator, while the Maps version for tablets will feature an interface that encourages exploration. Google Wallet no longer seems to be pursuing a debit-card strategy, although it intends to take on rival PayPal with an electronic funds transfer system powered by Gmail.

Advanced Social Networking

More than a dozen new features have been added to Google Plus (G+), the search giant’s promising social network. One of the most significant upgrades is Babel, a communication tool that integrates G+ Hangouts with other messaging applications such as Voice, Talk, Gmail, and the G+ Messenger.

Google is borrowing a page from Twitter with its own set of hash tags for G+. These smart tags will search across the G+ network for user-generated content that can be analyzed and organized by hash tags that can be clicked and expanded to reveal related content. This is similar to the discontinued Google Sparks feature of G+.

The most visible G+ upgrade can be appreciated in its user interface. Multiple columns that stream updates with animated transitions, and photos retouched with Google's patented "I'm feeling lucky" style of image editing, make for a much more visually pleasing experience on G+.

Streaming Music and Game Services

Google Play is no longer limited to solely serving as a marketplace for Android apps. For less than $10 per month, users can listen to unlimited tracks streamed from Google Play’s vast online music library. Users will be able to listen from their Android mobile devices or from compatible Web browsers.

Gamers will now be able to begin playing a game on their smartphones or tablets and later resume playing on a different device or Web browser. This is similar to the popular Xbox Live online gaming service from Microsoft, although Google plans to let developers come up with third-party gaming apps on Apple iOS and non-Chrome browsers.


Paul Williams is a part-time tech blogger, and full-time copywriter with InternetProviders.com.  You can contact him via email.

File Shares & Microsoft SharePoint: Collaboration Without Limitations

Guest Post by Eric Burniche of AvePoint.

File Shares can be a blessing and a curse when it comes to storing large quantities of data for business use. Yes, you enable a large number of users to access the data as if it were on their local machines, without actually having the data stored where disc space may be at a premium. But native management capabilities of file shares aren’t always ideal, so a third-party solution is necessary to fully optimize your file shares.

The primary benefit of file shares is simple, quick, and easy access to large volumes of data for large numbers of users at marginal infrastructure cost. With little or no training required, users can easily access file shares containing everything from individual documents to large files and rich media like video, audio and other formats that can range up to gigabytes (GB) in size.

The Simple Truth: Organizations are quickly realizing native file share limitations, including notoriously poor content management capabilities for search, permissions, metadata, and remote access. As a result, many have turned to Microsoft SharePoint to manage and collaborate on their most business-critical information and valued data.

The Problem: Organizations have various types of unstructured content on their file servers, which is data characterized as non-relational – e.g., Binary Large Objects (BLOBs) – that, when uploaded into SharePoint, is stored by default within the platform's Microsoft SQL Server database. Once file share content is uploaded, retrieving unstructured content from a structured database is inefficient, resulting in poor performance for SharePoint end-users and exponential storage cost increases for IT administrators.

Difficulty often arises when determining which content is business critical and should be integrated with SharePoint, as compared to which content should be left alone in file shares, decommissioned, or archived according to business need. File types and sizes also create difficulty when integrating file share content with SharePoint, because SharePoint itself blocks content types like Microsoft Access project files, .exe, .msi, and .chm help files, and because file sizes exceeding 2 GB violate SharePoint software boundaries and limitations.
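
When scoping which file share content can move into SharePoint at all, a quick inventory of blocked types and oversized files helps. The hedged Python sketch below walks a share and flags likely offenders; the blocked-extension list and UNC path are illustrative assumptions, not Microsoft's definitive lists.

    # Hypothetical pre-migration audit: flag files a default SharePoint setup
    # would reject because of blocked type or the 2 GB size boundary noted above.
    import os

    BLOCKED_EXTENSIONS = {".exe", ".msi", ".chm", ".adp"}  # sample list only
    SIZE_LIMIT = 2 * 1024 ** 3                             # 2 GB in bytes

    def audit_share(root):
        """Yield (path, reason) for files that would hit SharePoint's limits."""
        for dirpath, _dirs, files in os.walk(root):
            for name in files:
                path = os.path.join(dirpath, name)
                ext = os.path.splitext(name)[1].lower()
                if ext in BLOCKED_EXTENSIONS:
                    yield path, f"blocked file type ({ext})"
                elif os.path.getsize(path) > SIZE_LIMIT:
                    yield path, "exceeds the 2 GB size boundary"

    for path, reason in audit_share(r"\\fileserver\share"):  # assumed UNC path
        print(f"{reason}: {path}")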

The Main Questions: How can my organization utilize SharePoint to retire our legacy file share networks while avoiding migration projects and performance issues? How can my organization utilize SharePoint's full content management functionality if my business-critical assets are blocked file types or larger than the 2 GB limit covered by Microsoft's support contracts?

One Solution: Enter DocAve File Share Navigator 3.0 from AvePoint. DocAve File Share Navigator 3.0 enables organizations to increase file share activity and take full advantage of SharePoint’s content management capabilities, all while avoiding costs and disruptions associated with migration plans.
With DocAve File Share Navigator, organizations can:

  • Expose large files and rich media in SharePoint via list links, including blocked files and files of more than 2 GB, without violating Microsoft support contracts, to truly consolidate access to all enterprise-wide content
  • Decrease costs associated with migrating file share content into SharePoint’s SQL Server content databases by accessing file share content through SharePoint
  • Allow remote users to view, access, and manage network files through SharePoint without requiring a VPN connection
  • Provide direct access to local file servers through SharePoint without placing a burden on web front-end servers
  • Increase file share content discoverability by utilizing SharePoint’s full metadata-based search across multiple, distributed file servers
  •  Allow read-only previews of documents for read-only file servers

The native capabilities of file shares are unlikely to improve, but fortunately there are third-party solutions such as DocAve File Share Navigator that can help turn your file share from a headache to an asset, allowing you to continue to collaborate with confidence.


Eric Burniche is a Product Marketing Manager at AvePoint.