Making a case for moving legacy apps to the cloud is becoming easier, with the biggest driver being the ability to shift costs from capital to operating expenses, which can reduce overall spending. Also, renting capacity rather than owning servers and network infrastructure allows more flexibility in how computing resources are provisioned, enabling workloads to be matched to demand. Quick provisioning is key: new servers can be brought up in the cloud in just minutes, which not only makes it easier to improve availability but also enables more flexible disaster recovery mechanisms.
This get-up-to-speed guide explores the key approaches to migrating legacy apps to the cloud, and the value each can bring to your business. You can download my guide here.
DataCore’s comprehensive storage services stack has long been known for harnessing ultra-fast processors and RAM caches in x86 servers, for superior performance and enterprise-class availability. It now comes in a compact, hyper-converged package that is ideal for transactional databases and mixed workloads. DataCore Virtual SAN software is available for a free 30-day trial. It runs on any hypervisor and your choice of standard servers.
We tested DataCore Virtual SAN in May 2015.
Pricing: DataCore-authorized solution providers offer software packages starting under $10,000 for a two-node, high-availability cluster, including annual 24×7 support.
Requirements: Windows Server 2012 R2
For information on DataCore’s SANsymphony-V Software-defined Storage Platform, check out our other video here.
And for a copy of our white paper on hyper-converged storage, download our paper here.
Storage has seen its share of technology changes in recent years, but the most significant breakthrough isn't higher-capacity arrays; it's the shift to software-defined storage. One reason many enterprises are embracing this new paradigm is that for decades, managing storage has required a specialized skill set, which has fostered organizational silos, among other problems.
In this free e-Book that I wrote for VMware, I explore:
- How virtualization and cloud management impact storage management
- Implications of the control plane transitioning from hardware-centric to app-centric
- The role of VMware hypervisor in managing storage
Mobile banking has the opportunity to become just as disruptive in the modern era as ATMs were back in the 1970s. From the convenience of our own homes, and with our own devices, we now have the opportunity to do just about everything except get cash from our bank.
I have been a mobile banking customer for the past several years. As an independent businessman, I get paid with a lot of checks from my clients. It used to be a chore to walk over to the ATM and wait for a free machine to deposit them. Now I rarely visit the ATM, and having my bank email me a receipt is a nice touch. I can also quickly pay my bills from my mobile phone, so I am using my Web-based online banking access less and less. Mobile banking is not just convenient; it's a great time-saver!
In this white paper, which I wrote for Vasco with Will LaSala and Benjamin Wyrick, we present the results of research into what consumers want from their mobile banking applications, discuss some of the current issues surrounding the evolution of mobile banking, and review best practices that can help secure mobile banking apps without compromising the user experience.
On the Internet, the bad guys are sadly winning the war against banks and other financial institutions. Cybercriminals are becoming more sophisticated, deploying blended threats against banking and payment networks, and using multiple access methods to steal money. Their market share is increasing too. This isn’t good news for legitimate businesses that want to stop money laundering, e-commerce threats, account takeovers, pre-paid debit card abuse and other online banking exploits.
Two-factor exploits (such as Emmental) have also grown, making methods that use three or more factors increasingly important. And as more banking is done through mobile applications, institutions face more challenging security requirements, because customers can authenticate and conduct their business from anywhere and with any device.
In a white paper here, I describe these problems and how using a risk-based authentication approach can protect the entire lifecycle of banking activities as well as satisfy the needs of users for convenient and transparent access to their accounts.
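To make the idea concrete, here is a minimal sketch of how a risk-based authentication decision might be computed. Everything in it is hypothetical and illustrative: the signal names, weights, thresholds, and actions are my own inventions, not taken from the white paper or from any vendor's product.

```python
# Illustrative risk-based authentication scoring.
# All signals, weights, and thresholds below are hypothetical.

from dataclasses import dataclass

@dataclass
class LoginContext:
    known_device: bool         # device previously registered by this user
    usual_country: bool        # geolocation matches the user's history
    transaction_amount: float  # dollar value of the requested transaction
    failed_attempts: int       # recent failed logins on this account

def risk_score(ctx: LoginContext) -> int:
    """Accumulate risk points from independent contextual signals."""
    score = 0
    if not ctx.known_device:
        score += 30
    if not ctx.usual_country:
        score += 40
    if ctx.transaction_amount > 1000:
        score += 20
    score += min(ctx.failed_attempts, 5) * 10
    return score

def decide(ctx: LoginContext) -> str:
    """Map the score to an action: allow, step-up (extra factor), or deny."""
    score = risk_score(ctx)
    if score < 30:
        return "allow"    # low risk: transparent, frictionless access
    if score < 70:
        return "step-up"  # medium risk: challenge with another factor
    return "deny"

# A familiar device in the usual country making a small payment passes silently.
print(decide(LoginContext(True, True, 50.0, 0)))       # allow
# A new device abroad requesting a large transfer is blocked outright.
print(decide(LoginContext(False, False, 2000.0, 0)))   # deny
```

The point of the design is the one the paper makes: most legitimate sessions sail through with no extra friction, and the additional factors are demanded only when the context looks risky.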
The Internet is a nasty place, and getting nastier. Current breach detection products using traditional anti-malware sandbox technologies can't keep up with advanced persistent and hyper-evasive threats that pummel enterprise networks on an hourly basis. Malware authors encode their exploits with a number of operational vectors, so if one entry point doesn't work they can still find a way into your network to do their dirty work. And as businesses hire more outsourced consultants and part-time workers and make greater use of mobile devices, they open up additional avenues for malware to enter their corporate networks.
Some traditional AV and endpoint protection vendors have responded to these threats by adding features to their security products to do a better job of catching badly behaving packets as they pass through their detectors. They make use of limited virtual machines or operating system emulators to observe how a piece of malware operates. That is helpful, but it isn't enough. Many malware authors can detect when these simulated environments are active and evade detection accordingly. For example, some exploits such as W32.DelfInj can go to sleep for several days to escape detectors that only scan an infected system for the first several minutes.
What is needed is a next-generation sandbox that can correlate a series of particular breach events, add IP- and object-based reputation analysis, and do all of this in near real-time. This is what the Lastline Breach Detection Platform does. What makes Lastline unique is its range of discovery, the way it can effectively mimic actual PC or smartphone endpoints to examine malware behavior, and how it can scale up to handle very large networks with its modular and SaaS-based tools.
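To show what "correlating breach events with reputation analysis" might look like in practice, here is a toy sketch. It is not Lastline's algorithm: the event types, severity weights, time window, and threshold are all invented for illustration. The idea is simply that no single event is damning, but several weighted events on the same host inside a short window add up to a breach alert.

```python
# Hypothetical correlation of weighted breach events per host.
# Event names, weights, WINDOW, and THRESHOLD are illustrative only.

from collections import defaultdict

WEIGHTS = {
    "malicious_download": 40,  # object-based reputation hit
    "c2_callback": 50,         # connection to a known bad IP
    "sandbox_evasion": 30,     # evasive behavior seen during emulation
    "port_scan": 10,           # low-severity reconnaissance
}

WINDOW = 600     # correlate events within a 10-minute window (seconds)
THRESHOLD = 60   # combined score that triggers a breach alert

def correlate(events):
    """events: iterable of (timestamp, host, event_type) tuples.
    Returns the set of hosts whose windowed score reaches THRESHOLD."""
    per_host = defaultdict(list)
    for ts, host, etype in sorted(events):
        per_host[host].append((ts, WEIGHTS.get(etype, 0)))
    flagged = set()
    for host, scored in per_host.items():
        # Slide a window starting at each event; sum weights inside it.
        for i, (start, _) in enumerate(scored):
            window_score = sum(w for t, w in scored[i:] if t - start <= WINDOW)
            if window_score >= THRESHOLD:
                flagged.add(host)
                break
    return flagged

events = [
    (0,   "10.0.0.5", "malicious_download"),
    (120, "10.0.0.5", "c2_callback"),   # 40 + 50 = 90 within the window
    (50,  "10.0.0.9", "port_scan"),     # 10 alone stays below threshold
]
print(correlate(events))  # {'10.0.0.5'}
```

A real platform would of course do this across far richer telemetry and at much larger scale, but the scoring-and-correlation pattern is the core idea.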
Download my review of their system here.
Training is an investment, and building and sharpening an IT team's skill set is critical to the well-being of every enterprise. The good news is that there are lots of options out there, from expensive in-person seminars and trade shows to online classes. The bad news is that there are lots of options out there. Finding the right mix of training for your team isn't a simple undertaking.
In this white paper for ITworld, I talk about the ROI of online training, point out some resources to consider, discuss whether or not a certification program makes sense, and suggest ways to assess your personal learning style.
Enterprise mobility management (EMM) is a marathon, not a sprint, so you must be thinking about what you need today with the tools available, and be planning for the future. At the core, enterprises need stability and scale, so how do they choose the right solution? Analysts say this is the year to review your EMM strategy or develop one if you haven’t already. There are a lot of companies vying for the enterprise business with tools that have varying degrees of functionality. I wrote a white paper for ITworld that explores the journey as you manage this moving target.
You can download the paper here; registration is required.
The days when IT could tell end users what kinds of computing gear to purchase and use ended sometime in the 1990s, but for many years afterwards IT retained a stranglehold on deploying and maintaining enterprise infrastructure and corporate-wide applications, and on building data centers. Those days are quickly becoming another memory for IT departments, which have watched the evolution of customer-facing applications and the Web- and cloud-based worlds that have arisen around them. These apps are changing the way IT delivers its services, builds its enterprise architectures, and selects its systems.
In a paper that I wrote for GigaOM, I suggest ways to evaluate technology decisions from the perspective of customer experience, and I propose metrics that can help businesses justify and benchmark the success of their future IT investments.
The days when restaurants could rely exclusively on good food, an enjoyable ambiance, and word-of-mouth advertising are quickly coming to an end. More and more restaurants are discovering that they must use consumer-facing connected technologies such as websites, social networks, and mobile apps just to stay competitive.
Connected technology empowers restaurant customers. Consumers can locate restaurants, make reservations, browse menus and nutrition information, order food for delivery or pickup, pay for meals, and instantly redeem rewards. Connected technology can also free consumers from having to carry around an assortment of credit cards, debit cards, loyalty cards, gift cards, and printed coupons.
Connected technology empowers restaurant merchants. Owners and managers want to keep in touch with customers, accept online and mobile orders and payments, increase sales and tips, and respond quickly and effectively to complaints. With competitors constantly showing up on their customers’ screens, they can’t afford not to use connected technology.
There is a lot more that Ira Brodsky and I have to say on the subject: