SiliconANGLE: That next computer in the cloud could be an IBM mainframe

A small Minneapolis mainframe computer software startup is poised to change the way enterprises use and share data across the cloud.

Virtual Z Computing Inc. claims to be the first and only women-founded and women-led mainframe systems integrator in history. That is a bold claim, but perhaps more important is its pair of software applications, Lozen and Zaac, which connect native mainframe data with various third-party distributed, cloud-based applications.

I explain how the company’s products fit into the future of cloud computing in this story for SiliconANGLE here. 

SiliconANGLE: Databases then and now: the rise of the digital twin

When I first started in IT, back in the Mainframe Dark Ages, we had hulking big databases that ran under IBM’s Customer Information Control System, with applications written in COBOL. These mainframes ran on a complex collection of hardware and operating systems that was owned lock, stock and bus-and-tag barrel by IBM. The average age of the code was measured in decades, and code changes were measured in months. The databases contained millions of transactions, and the data was always out of date because these were batch systems: new data was uploaded every night.

Contrast that to today’s typical database setup. Data is current to the second, code is changed hourly, and the nature of what constitutes a transaction has changed significantly to something that is now called a “digital twin,” which I explain in my latest post for SiliconANGLE here.

Code is written in dozens of higher-level languages with odd names you may never have heard of, and it runs on a combination of cloud and on-premises equipment built from loads of microprocessors and open-source products that can be purchased from hundreds of suppliers.

It really is remarkable that all of these changes have happened within the span of a little more than 35 years. You can read more in my post.

SiliconANGLE: CIOs’ relationship with AI is complicated, but they have hopes for a promising future

Artificial intelligence — its value, risks and utility in enterprise scenarios — not surprisingly dominated the discussion at this week’s MIT CIO Symposium, one of the year’s biggest gatherings of senior information technology executives. In this post for SiliconANGLE, Paul Gillin and I review what some of the CIO panelists revealed about the state of their domains, and their relationship with AI tools.

SiliconANGLE: We need more breach transparency, but a lot of obstacles are in the way

The U.K.’s National Cyber Security Centre last week posted a joint blog with the nation’s Information Commissioner’s Office about the need for better transparency around cybersecurity breaches. They’re concerned about unreported incidents, particularly ransomware cases, which are getting more dangerous, more prevalent and more costly. The situation creates a vicious cycle: “If attacks are covered up, the criminals enjoy greater success, and more attacks take place,” they wrote in the post.

In this analysis for SiliconANGLE, I look at the obstacles that stand in the way of better breach transparency and what it will take to overcome them.

SiliconANGLE: AI-based chatbots can help improve customer support – if they’re done right

Most of us have been interacting with customer support agents for years. It can be a frustrating experience: oftentimes the agent knows less than we do about the product or service, and calls get dropped or bounced from one agent to another. About two years ago, I had such a bad experience with AT&T Inc.’s customer support that I ended up cancelling my cell and internet service with the company.

But now there are artificial intelligence chatbots and chat programs that are supposed to make our lives better. With all the attention focused on ChatGPT and other AI-based chatbots, a new long-term research study has found that AI can help improve support, but only in carefully controlled situations. In this post for SiliconANGLE, I examine those circumstances and what’s in store for the future of support, dive into what the researchers found, and make some recommendations on how to deploy AI more effectively for customer support.

The art of mathematical modeling

All this chatter about ChatGPT and large language models interests me, but from a slightly different perspective. You see, back in those pre-PC days when I was in grad school at Stanford, I was building mathematical models as part of getting my degree in Operations Research. You might not be familiar with this degree, but it basically meant applying various mathematical techniques to solving real-world problems. OR got its start trying to find German submarines and aircraft in WWII, then became popular for all sorts of industrial and commercial applications after the war. To a newly minted math undergrad, the field had a lot of appeal, and at its heart was building the right model.

Model building may bring up all sorts of memories of putting together the plastic replicas of cars, ships and planes that one would buy in hobby stores. But math models were a lot less tangible, and they required some careful thinking about the equations you chose and the assumptions you made, especially about the initial data set you would use to train the model.

Does this sound familiar? Yes, but then and now couldn’t be more different.

For my class, I recall that the problems we had to solve each week weren’t easy. One week we had to build a model to figure out which school in Palo Alto we would recommend closing, given declining enrollment across the district, a very touchy subject then and now. Another week we looked at revising the standards for breast cancer screening: at what age and under what circumstances do you make these recommendations? Coming up with a working (or not) model could take tens of hours.

I spoke with Adam Borison, a former Stanford Engineering colleague who was involved in my math modeling class: “The problems we were addressing in the 1970s were dealing with novel situations and figuring out what to do rather than what we had to know; they were built around judgment, not around data. Tasks like forecasting natural gas prices. There was a lot of talk about how to structure and draw conclusions from Bayesian belief nets, which pre-dated the computing era. These techniques have been around for decades, but the big difference with today’s models is the huge increase in computing power and storage capacity that we have available. That is why today’s models are more data-heavy, taking advantage of heuristics.”
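To make Borison’s point concrete, here is a minimal sketch of the kind of judgment-driven Bayesian updating a belief net performs, written in Python. The natural gas scenario comes from his example, but every probability below is invented for illustration.

```python
# A tiny two-node Bayesian belief net, worked by hand.
# All of the probabilities are invented for illustration.

# Prior belief (pure judgment): will natural gas prices spike this winter?
p_spike = 0.2

# Likelihoods: the chance of seeing a cold-winter forecast in each world.
p_cold_given_spike = 0.7     # cold forecasts usually accompany a spike
p_cold_given_no_spike = 0.3  # but they sometimes happen anyway

# We observe a cold-winter forecast, so update the belief with Bayes' rule.
p_cold = (p_cold_given_spike * p_spike
          + p_cold_given_no_spike * (1 - p_spike))
p_spike_given_cold = p_cold_given_spike * p_spike / p_cold

print(f"Prior P(spike) = {p_spike:.2f}")
print(f"Posterior P(spike | cold forecast) = {p_spike_given_cold:.2f}")  # ~0.37
```

The structure here, a prior, a likelihood and an update, comes from judgment about the problem rather than from a large data set, which is exactly the contrast Borison draws with today’s data-heavy models.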

Things started to change in the 1990s when Microsoft Excel introduced its “Solver” feature, which let you run linear programming models. This was a big step, because prior to this we had to write the code ourselves, a painful and specialized process that was the basic foundation of my grad school classes. (On the Stanford OR faculty when I was there were George Dantzig and Gerald Lieberman, the two guys who invented the basic techniques.) My first LP models were written on punched cards, which made them difficult to debug and change; a single typo would prevent a program from running. Once Excel became a basic building block of modeling, other tools followed, such as Tableau, designed from the ground up for data analysis and visualization. This was another big step, because sometimes the visualizations revealed flaws in your model or suggested different approaches.
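For those who never ran Solver, here is a minimal sketch of the kind of linear program it handles, written in Python with SciPy rather than a spreadsheet; the two-product factory and all of its coefficients are invented for illustration.

```python
# A minimal linear programming example, the kind of model Excel's Solver runs.
# The products and coefficients are invented for illustration.
from scipy.optimize import linprog

# Maximize profit 40x + 30y; linprog minimizes, so negate the coefficients.
c = [-40, -30]

# Constraints: 2x + y <= 100 (labor hours), x + 3y <= 90 (material units).
A_ub = [[2, 1],
        [1, 3]]
b_ub = [100, 90]

# Production quantities can't be negative.
bounds = [(0, None), (0, None)]

result = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
print(f"Optimal plan: x = {result.x[0]:.1f}, y = {result.x[1]:.1f}")
print(f"Maximum profit: {-result.fun:.2f}")
```

In the punched-card days, setting up even a toy problem like this meant coding the simplex steps yourself; Solver, and libraries like SciPy after it, reduced the work to writing down the objective and the constraints.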

Another step forward came with the era of big data; one example is the Kaggle data science contests. These have been around for more than a decade and have done a lot to stimulate interest in the modeling field. Participants are challenged to build models for a variety of commercial and social causes, such as working toward cures for Parkinson’s disease. Call it the gamification of modeling, something that was unthinkable back in the 1970s.

But now we have the rise of the chatbots, which have put math models front and center, for good and for bad. Borison and I are both somewhat hesitant about these kinds of models, because they aren’t necessarily about the careful analysis of data. Back in my Stanford days, we could fit all of our training data on a single sheet of paper, and that was probably being generous. With cloud storage, a model can consume a gazillion bytes of data in a matter of milliseconds, but getting a feel for that amount of data is tough to do. “Even using ChatGPT, you still have to develop engineering principles for your model,” says Borison. “And that is a very hard problem. The chatbots seem particularly well-suited to the modern fast-fail ethos, where a modeler tries to quickly simulate something and then revises it many times.” But that doesn’t mean you have to be good at analysis anymore, just at making educated guesses or crafting the right prompts. Having a class in the “art of chatbot prompt crafting” doesn’t quite have the same ring to it.

Who knows where we will end up with these latest models? It is certainly a far cry from finding the German subs in the North Atlantic, or optimizing the shortest path for a delivery route, or the other things that OR folks did back in the day.

Skynet as evil chatbot

When we first thought about the plausible future of a real Skynet, many of us assumed it would take the form of a mainframe or room-sized computer firing death rays to eliminate us puny humans. But now the concept has taken a much more insidious form: a chatbot?

Don’t laugh. It could happen. AI-based chatbots have gotten so good that they are being used in all sorts of clever ways: to write poems, songs and TV scripts, to answer trivia questions and even to write computer code. An earlier version was great at penning Twitter-ready misinformation.

The latest version is called ChatGPT, which was created by OpenAI and is based on its autocomplete text generator GPT-3.5. One author turned it loose on writing a story pitch. Yikes!

The first skirmish happened recently over at Stack Overflow, a website that coders use to find answers to common programming problems. Trouble is, ChatGPT’s answers are so good that at first blush they seem right, but on closer analysis they turn out to be wrong. Conspiracy theories abound. But for now, Stack Overflow has banned the bot from its forums. “ChatGPT makes it too easy for users to generate responses and flood the site with answers that seem correct at first glance but are often wrong on close examination,” according to this post over on The Verge. Thousands of bot-generated answers have poured in, making it difficult for moderators to sift through them.

It may be time to welcome our new AI-based overlords.

Can AI help you get your next job?

There is an increasing number of AI-based tools being used in hiring and other HR processes. I am not sure whether this is a benefit to job seekers or to the employment business. Certainly, there are plenty of horror stories, such as this selection of 2020’s most significant AI-based failures: deepfake bots, biased predictions of pre-criminal intent, and so forth. (And this study by Pew is also worth reading.)

I would argue that AI has more of a PR than HR problem, with the mother lode being traced back to the Terminator movies and Minority Report, with Asimov’s Three Laws of Robotics thrown in for good measure. In a post that I did for Avast’s blog last fall, I examined some of the ethical and bias issues around AI. Part of the issue is that we still need to encode human judgment into some digital form. And people aren’t as consistent as machines — which sometimes is a useful thing. I will give you an example at the end of this post.

But let’s examine what is going on with HR-related AI. A study done last year by HRExaminer identified a dozen hiring-based AI tools, half of them focused on the recruiting function. I would urge you to examine this list and see if any of them are being used at your workplace, or as part of your own job search and hiring process.

One of the tools on the list is HiredScore, which offers an all-purpose HR solution using various AI methods to rank potential job candidates, recommend internal employees for open positions, and measure inclusion and diversity. That is a lot of places where the doomsday “Skynet” scenario of the machines taking over could happen, and it is probably one of the few plot lines that Philip K. Dick never anticipated. Still, the company claims to have trained its machine-learning algorithms on more than 25 million resumes and twice as many job postings.

There are other niche products, such as Xref’s online reference checking or the testing prowess of TestGorilla. The latter offers a library of more than 135 “scientifically validated tests” for job-specific skills like coding or digital marketing, as well as more general skills like critical thinking. That one struck another nerve: I put that phrase in quotes because I can’t validate the claim.

As many of you who have followed my work have found out, my first job in publishing was working for PC Week when it was part of the Ziff Davis corporation. ZD had a rule that required every potential hire to submit to a personality test before getting a job offer. I have no recollection of the actual test questions all these years later, but obviously I passed and so began my writing career.

In the modern era, we now have vendors using AI tools to help screen applicants. I am not sure I would have passed these tests if ZD had had them available back in the day. That doesn’t make me feel better about using AI to assist in this process.

Let me give you a final example. When I went to visit my daughter last month, I was given a specific time period during which I was allowed to enter Israel. Only it wasn’t so specific: the approval was granted for “two weeks,” but not starting from any particular time of day. I interpreted it one way. The German gate agents in Frankfurt interpreted it another way. Ultimately, the Israeli authorities at the airport agreed with my interpretation and let me board my final flight. If a machine had screened me, I probably would not have been allowed to enter Israel.

In my post for Avast’s blog last year, I mention several issues surrounding bias: in the diversity of the programming team creating the algorithms, in understanding the difference between causation and correlation, and in interpreting the implied ethical standards of the actual algorithms themselves. These are all tricky issues, and made even more so when you are deciding on the fate of potential job applicants. Proceed with caution.

Avast blog: It ain’t easy to remove your personal data from the brokers

I tried to remove my own data from the brokers recently and found it to be a very frustrating online rabbit hole. You will find the task nearly impossible and, sadly, this is by intent and by design: the brokers charge by the gigabyte and aren’t paid for being accurate. And you don’t pay them anything, so you aren’t really the customer; you are just the unwilling victim.

Note: these brokers are the legitimate side of selling your data, not to be confused with the illegal dark web side, such as the recent scraping of 700 million LinkedIn users’ data. Fighting that is for another post.

I started out my own quest by submitting removal requests for my data to three places: Epsilon, Experian and Intelius. I picked these somewhat at random, but the trio gives you a good idea of what you are in for. My journey through this looking glass is chronicled in my latest blog post for Avast here.