Can AI help you get your next job?

An increasing number of AI-based tools are being used in the hiring and HR process. I am not sure whether this is a benefit to job seekers or to the employment business. Certainly, there are plenty of horror stories, such as this selection of 2020’s most significant AI-based failures: deepfake bots, biased predictions of pre-criminal intent, and so forth. (And this study by Pew is also worth reading.)

I would argue that AI has more of a PR than HR problem, with the mother lode being traced back to the Terminator movies and Minority Report, with Asimov’s Three Laws of Robotics thrown in for good measure. In a post that I did for Avast’s blog last fall, I examined some of the ethical and bias issues around AI. Part of the issue is that we still need to encode human judgment into some digital form. And people aren’t as consistent as machines — which sometimes is a useful thing. I will give you an example at the end of this post.

But let’s examine what is going on with HR-related AI. A study done last year by HRExaminer identified a dozen hiring-based AI tools, half of them focusing on the recruiting function. I would urge you to examine this list and see if any of them are being used at your workplace, or as part of your own job search and hiring process.

One of the ones on the list is HiredScore, which offers an all-purpose HR solution using various AI methods to rank potential job candidates, recommend internal employees for open positions, and measure inclusion and diversity. That is a lot of places where the doomsday “Skynet” scenario of the machines taking over could happen, and is probably one of the few plot lines that Philip K. Dick never anticipated. Still, the company claims they have trained their machine learning algorithms with more than 25M resumes and twice as many job postings.

There are other niche products, such as Xref’s online reference checking or the testing prowess of TestGorilla. The latter offers a library of more than 135 “scientifically validated tests” for job-specific skills like coding or digital marketing, as well as more general skills like critical thinking. That one struck another nerve for me; I put that phrase in quotes because I can’t validate the claim.

As many of you who have followed my work have found out, my first job in publishing was working for PC Week when it was part of the Ziff Davis corporation. ZD had a rule that required every potential hire to submit to a personality test before getting a job offer. I have no recollection of the actual test questions all these years later, but obviously I passed and so began my writing career.

In the modern era, we now have vendors that use AI tools to help screen applicants. I am not sure I would have passed these tests if ZD had them available back in the day. That doesn’t make me feel better about using AI to assist in this process.

Let me give you a final example. When I went to visit my daughter last month, I was given a specific time period during which I was allowed to enter Israel. Only it wasn’t specific: the approval was granted for “two weeks” but not starting from any specific time of day. I interpreted it one way. The German gate agents at Frankfurt interpreted it another way. Ultimately, the Israeli authorities at the airport agreed with my point of view and let me board my final flight. If a machine had screened me, I would probably not have been allowed to enter Israel.

In my post for Avast’s blog last year, I mention several issues surrounding bias: in the diversity of the programming team creating the algorithms, in understanding the difference between causation and correlation, and in interpreting the implied ethical standards of the actual algorithms themselves. These are all tricky issues, and made even more so when you are deciding on the fate of potential job applicants. Proceed with caution.

Avast blog: It ain’t easy to remove your personal data from the brokers

I tried to remove my own data recently and found it to be a very frustrating online rabbit hole. You will find the task to be nearly impossible and, sadly, this is by intent and by design: the brokers charge by the gigabyte and aren’t paid for being accurate. And you don’t pay them anything, so you aren’t really the customer; you are just the unwilling victim.

Note: these brokers are the legitimate side of selling your data, not to be confused with the illegal dark web side, such as the recent scraping of 700M LinkedIn users. Fighting that is for another post.

I started my own quest by submitting removal requests for my data to three places: Epsilon, Experian, and Intelius. I picked these somewhat at random, but the trio gives you a good idea of what you are in for. My journey through this looking glass is chronicled in my latest blog post for Avast here.

Webinar on overcoming fragmented data and improving the customer experience

In today’s changing times, tech companies must renew their focus on customers and use their data effectively to create a holistic, 360-degree view of those customers. With this view in place, they can both improve the customer experience and better inform product development, attracting new customers and retaining existing ones. Facing fragmented data, slow and fragile data pipelines, growing demands and increasing costs, legacy data warehouse solutions are no longer sufficient. Enter next-gen cloud data platforms. With integrated data and seamless sharing, tech companies can now serve real-time analytics, scale up operations, and enhance the customer experience. This link will take you to a webinar that I did for Snowflake.

Where Moneyball meets addiction counseling

A startup here in St. Louis is trying to marry the analytics of the web with the practice of addiction counseling and psychotherapy. In doing so, they are trying to bring the methods of Moneyball to improve therapeutic outcomes. It is an interesting idea, to be sure.

The firm is called Takoda, and it is the work of several people: David Patterson Silver Wolf, an academic researcher; Ken Zheng, their business manager; Josh Fischer, their co-founder and CTO; and Jake Webb, their web developer. I spoke to Fischer who works full time for Bayer, and supports Takoda on his own time as they bootstrap the venture. “It is hard to put all the various pieces together in a single company, which is probably why no one else has tried to do this before,” he told me recently.

The idea is to measure therapists based on patient performance during treatment, just like Moneyball measured runs delivered by each baseball player as their performance measurement. But unlike baseball, there is no single metric that everyone has agreed on, certainly nothing as obvious as RBIs or homers.

We are at a unique time in the healthcare industrial complex today. Everyone has multiple electronic health records that are stored in vast digital coffins, so named because this is where data usually goes to die. Even if we see doctors mostly within a single practice group, chances are our electronic medical records are stored in various data silos all over the place, with no way to link them together in any meaningful fashion.

On top of this, the vast majority of therapists have their own paper-based data coffins: file cabinets full of treatment notes that are rarely consulted again. Takoda is trying to open these repositories, without breaching any patient data privacy or HIPAA regulations.

Part of the problem is that when someone seeks treatment, they don’t necessarily learn how to get better or move beyond their addiction issues while they are in their therapist’s office. They have to do this on their own time, interacting with their families and friends, in their own communities and environment.

Another part of the problem is in how we select a therapist to see for the first time. Often, we get a personal referral, or else we hear about a particular office practice. When we walk in the door, we are usually assigned the therapist who is “up” – the next person with the lightest caseload or who is free at that particular moment. This is how many retail sales operations work: the sole design criterion is to evenly distribute leads and potential customers. That is a bad idea, and I will get to why in a moment.

Finally, the therapy industry has two habits that tend to make success difficult. One is that “good enough” is acceptable, rather than pursuing true excellence or actually curing a patient’s problem. When we seek medical care for something physically wrong with us, we can find the best surgeon, the best cardiologist, the best whatever; we look at their education, their experience, and so forth. Patients don’t have any way to do this when they seek counseling. The other issue is that therapists aren’t necessarily rewarded for excellence, and practices often let a lot of mediocre treatment slide. Neither is optimal, to be sure.

So along comes Takoda, who is trying to change how care is delivered, how success is measured, and whether we can match the right therapists to the patients to have the best treatment outcomes. That is a tall order, to be sure.

Takoda put together its analytics software and began building its product about a year ago. At first they thought they could create an add-on to the electronic health systems already in use, but quickly realized that wasn’t going to be possible. Instead, they decided to work with a local clinic, which agreed to be a proving ground for the technology and see if their methods work. They picked this clinic for geographic convenience (the principals of the firm are also here in St. Louis) and because it already sees numerous patients who are motivated to resolve their addiction issues. Also, the clinic accepts insurance payments. (Many therapists don’t deal with insurers at all.)

They wanted insurers involved because many of them are moving toward paying for therapy only if the provider can measure and show patient progress. While many insurers will pay for treatment regardless of result, that is evolving. Finally, the company recognized that opioid abuse has slammed the therapy world, making treatment more difficult and challenging existing practices, so the industry is ripe for a change. Takoda recognizes that this is a niche market, but they had to start somewhere. “So we are going to reinvent this industry from the ground up,” said Fischer.

So what does their system do? First off, it uses research to better match patients with therapists, rather than leaving this to chance or the “ups” system that has been used for decades. Research has shown that matching gender and race between patient and therapist can help or hurt treatment outcomes, albeit using very rough success measures.

Second, it builds in some pretty clever stuff, such as using your smartphone to create geofences around potentially risky locations for each individual patient, and providing a warning signal to encourage the patient to steer clear of these locations.
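To make the geofencing idea concrete, here is a minimal sketch of my own illustrating how such a check could work. This is not Takoda’s actual code, and the coordinates, fence name, and radius are all made up for the example:

```python
# A toy geofence check: flag when a phone's reported position falls
# within a radius of a patient-specific risky location.
from math import radians, sin, cos, asin, sqrt

def distance_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = (sin((lat2 - lat1) / 2) ** 2
         + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2)
    return 2 * 6371000 * asin(sqrt(a))  # Earth radius ~6371 km

def inside_geofence(position, fences):
    """Return the names of any risky fences the position falls inside."""
    lat, lon = position
    return [name for name, (flat, flon, radius) in fences.items()
            if distance_m(lat, lon, flat, flon) <= radius]

# Hypothetical risky spot for one patient: (lat, lon, radius in meters)
fences = {"old hangout": (38.6270, -90.1994, 200)}
print(inside_geofence((38.6271, -90.1995), fences))  # a few meters away
print(inside_geofence((38.6400, -90.2500), fences))  # kilometers away
```

In a real system the position would come from the phone’s location services, and crossing into a fence would trigger the warning notification described above.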

Finally, their system will “allow practice offices to see how their therapists are performing and look carefully at the demographics,” said Fischer. “We have to change the dynamic of how therapy care is being done and how therapists are rated, to better inform patients.”

It is too early to tell if Takoda will succeed or not, but if they do, the potential benefits are clear. Just like in Moneyball, where a poorly-performing team won more games, they hope to see a transformation in the therapy world with a lot more patient “wins” too.

A new way to do big data with entity resolution

I have this hope that most of you reading this post aren’t criminals or terrorists. Still, this might be interesting to you if you want to know how they think and carry out their business. Their number one technique is called channel separation: using multiple identities to avoid being caught.

Let’s say you want to rob a bank, or blow something up. You use one identity to rent the getaway car. Another to open an account at the bank. And other identities to hire your thugs or whatnot. You get the idea. But in the process of creating all these identities, you aren’t that clever: you leave some bread crumbs or clues that connect them together, as is shown in the diagram below.

This is the idea behind a startup that has just come out of stealth called Senzing. It is the brainchild of Jeff Jonas. The market category for these types of tools is called entity resolution. Jonas told me, “Anytime you can catch criminals is kind of fun. Their primary tradecraft holds true for anyone, from bank robbers up to organized crime groups. No one uses the same name, address, phone when they are on a known list.” But they leave traces that can be correlated together.

Jonas started working on this many years ago at IBM, and eventually spun out Senzing with his tool to disrupt the entity resolution market. The goal: you have all this data and you want to link it together and eliminate or find duplicates, or near-duplicates. Take our criminal, who is going to rent a truck, buy fuel oil and fertilizer, and so forth, using the sample identities shown at the bottom of the graphic. Senzing’s software can parse all this data and, within a matter of a few minutes, figure out who Bob Smith really is. In effect, it merges all the different channels of information into a single, coherent whole, so you can make better decisions.
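For a flavor of the channel-merging idea, here is a toy sketch of entity resolution in Python. This is my own illustration, not Senzing’s algorithm; the names and identifiers are invented. Records that share any identifier (phone, address, email) are merged into one entity using a simple union-find structure:

```python
from collections import defaultdict

def resolve_entities(records):
    """Group records that share at least one identifier value."""
    parent = list(range(len(records)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path compression
            i = parent[i]
        return i

    def union(i, j):
        parent[find(i)] = find(j)

    # Index every identifier value back to the records that use it
    seen = defaultdict(list)
    for idx, rec in enumerate(records):
        for field in ("phone", "address", "email"):
            value = rec.get(field)
            if value:
                seen[(field, value)].append(idx)

    # Any two records sharing a value belong to the same entity
    for indices in seen.values():
        for other in indices[1:]:
            union(indices[0], other)

    entities = defaultdict(list)
    for idx in range(len(records)):
        entities[find(idx)].append(records[idx]["name"])
    return sorted(sorted(names) for names in entities.values())

records = [
    {"name": "Bob Smith",   "phone": "555-0100", "email": "bs@example.com"},
    {"name": "Robert S.",   "phone": "555-0100", "address": "12 Oak St"},
    {"name": "B. Smythe",   "address": "12 Oak St"},
    {"name": "Alice Jones", "phone": "555-0199"},
]
print(resolve_entities(records))
```

The three “Bob” aliases collapse into one entity via the shared phone and address, while Alice stays separate. Real entity resolution also handles fuzzy matches, typos, and deliberate obfuscation, which is where the hard work lies.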

Entity resolution is big business. There are more than 50 firms that sell some kind of service based on it, but most offer a custom consulting tool that requires a great deal of care, feeding, and specialized knowledge. Many companies end up with million-dollar engagements by the time they are done. Jonas is trying to change all that and make the process much cheaper: you can run his software on any Mac or Windows desktop, rather than putting a lot of firepower behind the complex models that many of these consulting firms use.

Who could benefit from his product? Lots of companies. For example, a supply chain risk management vendor can use it to scrape data from the web and determine who is making trouble for a global brand. Or environmentalists looking to find frequent corporate polluters. A financial services firm trying to find the relationship between employees and suspected insider threats or fraudulent activities. Or child labor lawyers trying to track down frequent miscreants. You get the idea. You know the data is out there in some form, but it isn’t readily or easily parsed. “We had one firm that was investigating Chinese firms that had poor reputations. They got our software and two days later were getting useful results, and a month later could create some actionable reports.” The ideal client? “Someone who has a firm that may be well respected, but no one actually calls” with an engagement, he told me.

Jonas started developing his tool when he was working at IBM several years ago; I interviewed him for ReadWrite back then and found him fascinating. An early version of his software played an important role in figuring out that the young card sharks behind the movie 21 were counting cards in several Vegas casinos, matching up their winnings all over town and getting the team banned. Another example: Colombian universities saved $80M after finding 250,000 fake students enrolled.

IBM gets a revenue share from Senzing’s sales, which makes sense. The free downloads are limited in how much data you can parse (10,000 records), and the company also sells monthly subscriptions that start at $500 for the simplest cases. It will be interesting to see how widely his tool will be used: my guess is that there will be lots of interesting stories to come.

FIR B2B Podcast #89: Fake Followers and Real Influence

The New York Times last week published the results of a fascinating research project entitled The Follower Factory, which describes how firms charge to add followers, retweets, likes and other social interactions to social media profiles. While we aren’t surprised by the report, it highlights why B2B marketers shouldn’t shortcut the process of understanding the substance of an influencer’s following when deciding whom to engage. The Times report identifies numerous celebrities from entertainment, business, politics, sports and other areas who have inflated their follower numbers for as little as one cent per follower. In most cases, the fake followers are empty accounts without any influence, or copies of legitimate accounts with subtle tweaks that mask their illegitimacy.

The topic isn’t a new one for either of us. Paul wrote a book on the topic more than ten years ago. Real social media influencers get that way through an organic growth in their popularity, because they have something to say and because people respond to them over time. There is no quick fix for providing value.

Twitter is a popular subject for analysis because it’s so transparent: Anyone can investigate follower quality and root out fake accounts or bots by clicking on the number of followers in an influencer’s profile. Other academic researchers have begun to use Twitter for their own social science research, and a new book by UCLA professor Zachary Steinert-Threlkeld called Twitter as Data is a useful place for marketers who know a little bit of code to assemble their own inquiries. (The online version of the book is free from the publisher for a limited time.) David has written more about the book on his blog here.

Paul and David review some of their time-tested techniques for growing your social media following organically, and note the ongoing value of blogs as a tool for legitimate influencers to build their followings.

You can listen to our 16 min. podcast here:

Researching the Twitter data feed

A new book by UCLA professor Zachary Steinert-Threlkeld called Twitter as Data is available online free for a limited time, and I recommend you download a copy now. While written mainly for academic social scientists and other researchers, it has great utility in other situations.

Zachary has been analyzing Twitter data streams for several years, and basically taught himself enough Python and R to be dangerous. The book assumes a novice programmer, and provides the code samples you need to get started with your own analysis.

Why Twitter? Mainly because it is so transparent. Anyone can figure out who follows whom, and easily drill down to see who these followers are and how often they actually use Twitter themselves. Most Twitter users by default have open accounts, and want people to engage them in public. Contrast that with Facebook, where the situation is the exact opposite and the data is thus much harder to access.

To make matters easier, Twitter data comes packaged in three different APIs: streaming, search, and REST. The streaming API provides data in near-real-time and is the best way to get data on what is currently trending in different parts of the world. The downside is that you could be picking a particularly dull moment in time when nothing much is happening. The streaming API is limited to just one percent of all tweets: you can filter and focus on a particular collection, such as all tweets from one country, but you still only get one percent. That works out to about five million tweets daily.

Many researchers run multiple queries so they can collect more data, and several have published interesting data sets that are available to the public. And there is this map that shows patterns of communication across the globe over an entire day.

The REST API has limits on how often you can collect and how far back in time you can go, but isn’t limited to the real-time feed.

Interesting things happen when you go deep into the data. When Zachary first started his Twitter analysis, he found, for example, a large body of basketball-related tweets from Cameroon, and upon further analysis linked them to a popular basketball player (Joel Embiid) who was from that country and had a lot of hometown fans across the ocean. He also found that lots of tweets from the Philippines in Tagalog were being miscataloged as an unknown language. When countries censor Twitter, that shows up in the real-time feed too. Now that he is an experienced Twitter researcher, he focuses his study on the smaller Twitterati: studying celebrities or those with massive Twitter audiences isn’t really very useful. The smaller collections are more focused, and it is easier to spot trends.
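For a flavor of what this kind of first-pass analysis looks like, here is a toy example of my own (not from the book): tallying tweets by the language code Twitter attaches to each one, which is how oddities like the miscataloged Tagalog tweets surface. The sample tweets are invented:

```python
# Tally tweets per Twitter-reported language code. Buckets like "und"
# (undetermined) that grow large are worth a second, manual look.
from collections import Counter

def language_counts(tweets):
    """Count tweets per language code, defaulting to 'und' if missing."""
    return Counter(t.get("lang", "und") for t in tweets)

sample = [
    {"text": "Trust the Process!", "lang": "en"},
    {"text": "Mabuhay si Embiid!", "lang": "und"},  # actually Tagalog
    {"text": "Allez les bleus",    "lang": "fr"},
    {"text": "Magandang umaga",    "lang": "und"},  # actually Tagalog
]
counts = language_counts(sample)
print(counts.most_common())  # "und" leads the tally here
```

Running the same tally over a real streaming-API collection, and then inspecting the over-represented “und” bucket, is exactly the kind of inquiry the book walks you through.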

So take a look at Zachary’s book and see what insights you can gain into your particular markets and customers. It won’t cost you much money and could pay off in terms of valuable information.


When anonymous web data isn’t anymore

One of my favorite NY Times technology stories (other than, ahem, my own articles) is one that ran more than ten years ago. It was about a supposedly anonymous AOL user who was picked out of a huge database of search queries by researchers. They were able to correlate her searches and track down Thelma, a 62-year-old widow living in Georgia. The database was originally posted online by AOL as an academic research tool, but after the Times story broke it was removed. The data “underscore how much people unintentionally reveal about themselves when they use search engines,” said the Times story.

In the years since that story, tracking technology has gotten better and Internet privacy has all but disappeared. At the DEFCON trade show a few weeks ago in Vegas, researchers presented a paper on how easy it can be to track down folks based on their digital breadcrumbs. The researchers set up a phony marketing consulting firm and requested anonymous clickstream data to analyze. They were actually able to tie real users to the data through a series of well-known tricks, described in this report in Naked Security. They found that if they could correlate personal information across ten different domains, they could figure out who the common user visiting those sites was, as shown in this diagram published in the article.
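Here is a simplified sketch of that cross-domain correlation trick. This is my own construction, not the researchers’ code, and the profiles, visitor IDs, and overlap threshold are all made up: the idea is to match a pseudonymous visitor to a known public profile by the distinctive set of domains both have touched.

```python
# Match each public profile to the "anonymous" visitor whose clickstream
# overlaps it on the most domains; require a minimum overlap to claim a hit.
def best_match(known_profiles, clickstreams, min_overlap=3):
    """known_profiles / clickstreams: name -> set of visited domains."""
    matches = {}
    for person, public_domains in known_profiles.items():
        visitor, overlap = max(
            ((vid, len(public_domains & domains))
             for vid, domains in clickstreams.items()),
            key=lambda pair: pair[1])
        matches[person] = visitor if overlap >= min_overlap else None
    return matches

# Made-up data: domains referenced publicly vs. pseudonymous clickstreams
known_profiles = {"thelma": {"gardening.example", "church.example",
                             "dogfood.example", "news.example"}}
clickstreams = {
    "user-417": {"gardening.example", "church.example",
                 "dogfood.example", "weather.example"},
    "user-902": {"sports.example", "news.example"},
}
print(best_match(known_profiles, clickstreams))
```

Here “user-417” shares three distinctive domains with the public profile and is flagged as the likely match. The real attack correlated across ten domains, but the principle is the same: enough overlapping breadcrumbs uniquely identify a person.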

The culprits are browser plug-ins and embedded scripts on web pages, which I have written about before here. “Five percent of the data in the clickstream they purchased was generated by just ten different popular web plugins,” according to the DEFCON researchers.

So is this just some artifact of gung-ho security researchers, or does it have real-world implications? Sadly, it is very much a reality. Last week Disney was served legal papers over secretly collecting kids’ usage data from its mobile apps; the suit says the apps (which don’t ask parents’ permission for the kids to use them, which is illegal) can track the kids across multiple games, all in the interest of serving up targeted ads. The full list of 43 apps that have this tracking data can be found here, including the one shown at right.

So what can you do? First, review your plug-ins and delete the ones that you really don’t need. In my article linked above, I try out Privacy Badger, and I have continued to use it. It can be entertaining or terrifying, depending on your POV. You could also regularly delete your cookies and always run private browsing sessions, although you give up some usability by doing so.

Privacy just isn’t what it used to be. And it is a lot of hard work to become more private these days, for sure.

Everyone is now a software company (again)

Several years ago I wrote, “everyone is in the software business. All of the interesting business operations are happening inside your company’s software.” Since then, this trend has intensified. Today I want to share with you three companies that should come under the software label. And while you may not think of these three as software vendors, all three run themselves like a typical software company.

The three are Tesla, Express Scripts, and the Washington Post. It is just mere happenstance that they also make cars, manage prescription benefits and publish a newspaper. Software lies at the heart of each company, as much as a Google or a Microsoft.

In my blog post from 2014, I talked about how the cloud, big data, creating online storefronts and improving the online customer experience are driving more companies to act like software vendors. That is still true today. But now there are several other things to look for that make Tesla et al. into software vendors:

  • Continuous updates. One of the distinguishing features of the Tesla car line is that the cars update themselves while parked in your garage. Most car companies can’t update their fleets as easily, or even at all: you have to bring the cars in for servicing to make any changes to how they operate. Tesla’s dashboard is mostly contained inside a beautiful, huge touch LED screen: the days of dedicated dials are so over. Continuous updates are also the case for The Washington Post website, so it can stay competitive and current. The Post publishes more total articles than the NYTimes, even though the NYTimes has double the reporting staff of the DC-based paper. That shows how seriously the Post takes its digital mission too.
  • These companies are driven by web analytics and traffic and engagement metrics. Just like Google or some other SaaS-based vendor, The Washington Post post-Bezos is obsessed with stats. Which articles are being read more? Can they get quicker load times, especially on mobile devices? Will readers pay more for this better performance? The Post will try out different news pegs for each piece to see how it performs, just like a SaaS vendor does A/B testing of its pages.
  • Digital products are the drivers of innovation. “There are no sacred cows [here, we] push experimentation,” said one of the Post digital editors. “It is basically, how fast do you move? Innovation thrives in companies where design is respected.” The same is true for Express Scripts. “We have over 10 petabytes of useful data from which we can gain insights and for which we can develop solutions,” said their former CIO in an article from several years ago.
  • Scaling up the operations is key. Tesla is making a very small number of cars at present, but they are designing their factories to scale up so they can move into a bigger market. Like a typical SaaS vendor, they want to build in scale from the beginning. They built their own ERP system that shortens the feedback loop from customers to engineers and manages their entire operations, so they can make quick changes when something isn’t working. You don’t think of car companies as being so nimble. The same is true for Express Scripts. They are in the business of managing your prescriptions, and understanding how people get their meds has become more of a big data problem. They can quickly figure out if a patient is following their prescription and predict the potential pill waste if they aren’t. The company has developed a collection of products that tie an online customer portal to their call center and mobile apps.

I am sure you can come up with other companies that make normal stuff like cars and newspapers to which you can apply some of these metrics. The lessons learned from the software industry are slowly seeping into other businesses, particularly those that want to fail fast and adapt quickly as their markets and customers change.

SecurityIntelligence blog: Tracking Online Fraud: Check Your Mileage Against Endpoint Data

A recent Simility blog post detailed how the company is tracking online fraud. With the help of a SaaS-based machine learning tool, Simility and its beta customers have seen a 50 to 300 percent reduction in fraudulent online transactions. This past January, they looked at 100 different behaviors across 500,000 endpoints scattered around the world, found that more than 10,000 of those devices were compromised, and then looked for patterns of similar behavior. They found seven commonalities, and some of them are surprising.

You can read my blog post on IBM’s SecurityIntelligence.com here.