I find more central office lore in Seattle

I have a thing about the telephone central office (CO). I love spotting them in the wild, getting some sense of the vast connectedness they represent, the legal wrangling that took place over their real estate, and their place in our telecommunications history. That is a lot to pack into a series of structures, which is why I am attracted to them.

I wrote that six years ago and it still holds true. This week I added a new CO to my “collection” of favorite places, one that calls itself a connections museum and occupies the top floors of a working CO in an industrial area of Seattle. It is an interesting place, but the label isn’t quite appropriate: it is more an interactive time machine that will take you back over the past 100 years of telecom history. Yes, you will find working models of phones of yesteryear (such as the 1908 wall phone model shown here), but the real treats — especially for this networking geek — are the electromechanical switching fabrics that were once found in every CO on the planet and are now extinct.

In the pre-TCP/IP, analog landline days, every phone had to be connected via a slender pair of copper wires from one’s home (or business) to the CO. That is a lot of wire. Once that pair entered the CO premises, it was connected to these huge machines to make and receive phone calls. Much of that wire now sits unused, including the pair in my own home. I think I used my last landline around 2002 or so.

What is both impressive and hard to comprehend in the Seattle CO is how enormous this equipment is, and how small the current digital switches and IP-based networking gear are by comparison. I have seen numerous mainframe computers and they are no small objects. But panel frames and crossbar switches loom large and have a distinctly oily smell, which gives away their mechanical nature. Even with all of their moving parts — and there are thousands of them — these beasts worked flawlessly for decades to place our phone calls.

The other thing that becomes clear walking around the Seattle CO is how extensible phone tech was. The phone network connected gear of many different vintages — there were even examples of those quirky 1960s-era video phones, a capability we now carry around in our pockets and think nothing of using.

The place is an active test bed for the old phone tech, and numerous volunteers have devoted many hours to restoring the equipment to some semblance of operation. That is quite an achievement, because the surviving documentation is incomplete, incomprehensible, or both. Trial and error and patience are important skills for bringing this stuff back to life.

Museum docents will take you around the CO, patiently explain what is going on, and show you the process of completing a phone call from one phone to another. There are also phone switchboards that Ernestine would be at home operating, and visitors can do the one-ringy-dingy themselves.

One thing that I had forgotten about was the importance of real estate in these COs. Back in the 2000s, when DSL technology was coming into vogue, the local phone companies weren’t happy about having competition and tried to stop the DSL vendors from installing their gear in the COs. What became obvious as they attempted to create legal roadblocks was that there was plenty of room for the new stuff. As the old crossbar switches were replaced, there was plenty of floor space to hold a couple of 19-inch racks of digital gear. (BTW, that standard harks back to the 1890s.)

As I was leaving the CO, a tour group was coming in. The group was dressed up in what they called steampunk costumes (it looked more Dickensian to my untrained eye), but it seemed very appropriate: these were people who understood the broad sweep of history and wanted to recall a bygone era. While I didn’t need any change of clothes, I recognized kindred spirits.

Building an unusual 30-year career in IT at the Catholic Health Association

I had a chance to speak to Janey Brummett, CAE, PMP, who has spent three decades working in the IT department of the Catholic Health Association, the national leadership organization of the Catholic health ministry, representing the largest nonprofit providers of health care services in the nation. She came to the association as a paralegal who got an early taste for computers, back when PCs were first coming into businesses and when she was helping to spec out mainframe systems. “I was the conduit to talk to the programmers back then,” she said. Over the years she worked her way up the IT org chart until retiring this year in a position that most of us would characterize as CIO.

I recall those early years with a lot of fondness, as does Janey. Back then, we were pioneers in building local area networks that used very thick wires that were expensive to install. Wi-Fi didn’t exist, and PCs had massive 40 MB hard drives — well, they seemed massive at the time. Now you can’t even buy anything with that little storage.

Those early LANs ran Novell NetWare and GroupWise, an early collaboration tool that handled email, shared calendars, and documents.

The big switch came for CHA in the early 1990s, when it moved from DOS-based desktops to Windows. She oversaw a major upgrade of the NetWare server that turned into an all-nighter due to data migration problems and access rights that didn’t transfer over. “That was a horrible experience,” as she recalls.

Now CHA is using Microsoft Copilot and Teams to communicate, and it is developing its own AI-based tools to access a common data platform. “We are building a virtual data analyst that we can query and build charts and collect presentation talking points.” That is a sign of the times, to be sure.

Janey remembers supporting a speaker at an annual association meeting in the early 2000s. “The speaker came to me a few minutes before their talk with a virus-infected floppy disk. That was typical of the times, and I sure am glad that systems have gotten a lot more stable and straightforward since then! Nowadays, there is more of a focus on end user tools and it all works really well.” I completely agree.

CHA was an early adopter of the internet, and Janey recalls teaching the first internal classes on how to use it in the mid-1990s. That was the same timeline for me (I started my Web Informant newsletters in the fall of 1995, BTW) and those were pretty exciting times, to be sure.

“The pandemic years really changed our operations,” she told me. “Back then, we had no one working remotely whatsoever. But we were fortunate to have put in place the infrastructure to support remote workers and had just started rolling out Teams. We had a lot of resistance before the pandemic, not to mention that less than half of our staff had laptops and we had to get that in place. Now we are almost all remote workers, with two or three days per month that people need to be in the office. Having Teams got us to jump light years ahead to collaborate to where it is second nature.”

How has she managed to stay at the same organization for all this time? “It comes down to constantly learning and innovating. Plus I enjoy what I do and my job is continually changing and evolving. IT should really stand for innovation technology.”

To read more interviews with long-standing IT managers, check out this three-part series that I wrote in the fall of 2022.

Can Movable Type become a useful AI writer’s tool?

Once upon a time, when blogs were just beginning to become A Thing, the company to watch was Six Apart. Its blogging software was called Movable Type. Then the world shifted to WordPress, and soon there were other blogging platforms that turned Movable Type into the Asa Hutchinson of that particular market. (What? They are still around? Yes, and they account for about one percent of all blogs.)

Well, Asa no more, because the company has fully embraced AI in a way that even Sports Illustrated (they recently fired their human writers) would envy. If you have never written a book, you can have a ready-made custom outline in a few minutes. All it takes is a prompt and a click. You don’t even have to have a fully-formed idea, understand the nature of research (either pre- or post-internet), or even know how to write word one. (There are other examples on their website if you want to check them out.)

MovableType’s AI creates “10 chapters spanning 150+ pages, and a whopping 35k+ words” (or so they say) of… basically gibberish. They of course characterize it somewhat differently, saying its AI output is “highly specific & well researched content.” It isn’t: there are no citations or links in the content. The output looks like a solid book-like product with chapters and sub-heads but is mostly vacuous drivel. The company claims it comes tuned to match your writing style, but again, I couldn’t find any evidence of that. And while “each chapter opens with a story designed to keep your readers engaged,” my interest waned after page 15 or so.

Perhaps this will appeal to some of you, especially those of you who haven’t yet written your own roman à clef, or who are looking to turn your online bon mots into the next blockbuster book. But I don’t think so. Writing a book is hard work, and while it is not growing crops or working in a factory, you do have to know what you are doing. The labor involved helps you create a better book, and editing your own work is a learned skill. I don’t think AI can provide any shortcuts, other than to produce something subpar.

I have written three books the old-fashioned way: by typing every word into Word. Two of them got published; one got shelved as the market for OS/2 fell into the cellar between the book proposal and completion. I got tired of rewriting it (several times!) for the next big moment of IBM’s beleaguered OS, which never came. The two published books never made much money for anyone. But I did learn how to write a non-fiction book and, more importantly, how to write an outline that was more of a roadmap and a strategy-and-structure document. This is not something that you can train AI to do, at least not yet.

When I read a book, I cherish the virtual bond between me and the author, whether I read my go-to mystery fiction or a how-to business epic. I want to bathe in the afterglow of what the author is telling me, through characters, plot points, anecdotes, and stories. That is inherently human, and something that the current AI models can’t (yet) do. While MovableType’s AI is an interesting experiment, I think it is a misplaced one.

Is it time to upgrade my hearing aid?

More than five years ago, I wrote about my journey acquiring my first hearing aid. With a new insurance plan that has hearing benefits, I thought it was time to take another look and see how the latest aids could help me with my high-frequency hearing loss and tinnitus.

There are three aspects of your hearing that will motivate you to get an aid. First, you have some kind of hearing loss (as I said, for me it is the higher frequencies, which is typical for older folks) and you want to hear things better. Second, there is the type of sounds you “hear” with your tinnitus, and whether you want them masked with an aid or some combination of audio processing and masker. Tinnitus can vary by time of day, by whether you have gotten enough sleep, or by stress level. If this describes you, then ideally you want to be able to adjust the masking technology to give you the greatest comfort. Part of the issue here is that you may not want to become a DIY audiologist or software engineer.
Finally, there is how the aids interact with each other to place you in a sonic environment so you can understand what is going on around you. Given that I am completely deaf in one ear and need only one aid, this isn’t relevant for me.
If you haven’t read my original blog post when I first bought my aid, now would be a good time to go do that to remind yourself of that process.

But buying the first aid may seem easy compared with the complicated journey of upgrading it. This is because the hearing aid medical-industrial complex is just that — complicated. And while there are professionals who can be helpful, you have to first know the right questions to ask, then know what the aids can and cannot do, and have a great deal of patience being the hearing-deprived patient. Oh, and be prepared to spend lots of time and money when you do get your aid.

You would think that already having an aid would mean you have already dealt with these issues. But you would be wrong. The replacement market is truly a different ball game. This is because, being human, our hearing changes as we age. And the aid technology marches on, which means your earlier knowledge is outdated. And now you have a standard — your existing aid — to compare things with, which introduces new complexities.

There is one other factor: you can now purchase aids over the counter. That is fine if you have a simple hearing loss, don’t have to muck around with the frequency controls, and don’t have a lot of tinnitus. Yes, no, and no for me, so this wasn’t an option. The OTC aids generally cost less, but don’t include much in the way of hand-holding and servicing. This is not like buying a blender: instructions and personal demonstrations are essential to their operation. You might not like the initial fit of the instrument, or be confounded by its numerous settings.

The OTC aids don’t really give you the best price comparison either. When you buy an aid from an audiologist, you are actually paying for a service contract for a period of time, and this contract may not cover all problems or may have exceptions (like water damage).

As I mentioned previously, for the past five-plus years I have been using one of the Starkey models, and I was generally happy with it. I made an appointment with a different audiologist than the one I had been seeing for the Starkey, just to see how the two approach solving my problems. I give the new audiologist, a woman whom I will call B, props for thoroughness, knowledge, and service. She spent nearly two hours on my first visit, and wanted to schedule several follow-up visits. She told me that I am her first tinnitus patient who is not a new hearing aid user. We will get to why that matters shortly.

B has worn aids since childhood, a perspective I liked. She also had some very fancy gear to test an aid’s programming, which I liked as well. Think of the device an optometrist uses to determine whether your glasses prescription matches the actual optics. She puts every aid she sells through this device, to ensure it is programmed properly.
Why is this important? Modern hearing aids are more software than hardware and can be programmed in one of three ways. First, at the factory, when they are assembled and set up with various automatic sound-processing features (to soften noisy environments, to enhance the frequencies used in human speech, to change the microphone coverage area to the sides or in front of you, and other things).
One piece of programming is very important to me: figuring out the tinnitus masking sounds, which can be tricky to deal with. Everyone has different “ringing” sounds as part of their tinnitus affliction — some relatively simple (such as mine) and some that vary in frequency, period, and loudness. The aids do some counter-programming — meaning they produce their own sounds — to try to keep your attention away from the tinnitus; at least, that is the working theory of the moment.
Second, the audiologist can change some of the programming that affects the aid’s audio processing, and can also set up presets that you can access with the buttons on the aid or in its smartphone app. I mentioned to B that one night my wife accidentally slammed a cabinet door — the sound of which, amplified by my aid, almost made me jump. She told me that she could program the “door slam sound” (yes, this is a thing) to soften it up.
Third, yes, there is an app for that, and the various vendors do an uneven job with their apps. The app is the piece you fiddle with yourself (assuming you have a smartphone and want to do this). Each aid manufacturer has different models with varying features — which you may or may not need. Tinnitus masking is generally included in the higher-end (and pricier) products, just my luck.
So B recommended that I check out the ReSound models. She likes their app, which has a lot of controls, as you can see from the screenshots. This was very obvious when I compared it to the app that controls my old Starkey aid, which doesn’t have as many tinnitus presets available, either to the audiologist or in the app itself.
One of the things I will be doing over the next week or so is trying the ReSound out in different sonic environments, to see whether it is worth the cost of a new aid or I can still get by with the old aid and a few simple adjustments.
I should mention one other complicating factor. My old aid had regular batteries that needed replacement every week or so. But most of the newer aids have rechargeable batteries that last about a day on a charge and probably need complete replacement every three years. I don’t mind the non-rechargeable kind but you may feel otherwise.
Then there is the matter of insurance. My insurance plan covers $2,000 per aid, but this is deceptive, as some audiologists (B among them) don’t take insurance because they don’t want to bother with the reimbursement. But she is very upfront about the services she provides to make sure your aids are the right choice for you, and she includes several visits in the first year you buy an aid from her.
The insurance issue is a vexing one. For years, my plans had no coverage, so it was easier. There are several problems: first, as I said, reimbursement covers only part of the costs, which still leaves the aids expensive. Second, there is a general lack of transparency about pricing. Each audiologist can set prices independently of others, which as I mentioned bundles in a certain level of service. This makes it hard to shop around. Selling OTC aids was supposed to make things more transparent, but it hasn’t.
So let’s take a step back here. The real issue is what problem you are trying to solve. For example, I would like better Bluetooth fidelity (especially outdoors, where my old aid is almost useless), better control over the masker, and more options in general in the smartphone app. Not all aids deliver all of these features. And sometimes you don’t know what is important until you try out the aid and hear it for yourself, or see the app’s controls and decide whether you want to spend your day fiddling with them.
After spending a few days with the new ReSound aid, I decided to stick with the tried-and-true Starkey. Much as I would like the latest and greatest tech, I just couldn’t justify the extra bucks. Perhaps when it finally bites the dust that will force my decision, but by then there will be something even more techy.

A brief history of domain squatting

A long time ago at the dawn of the internet era, a tech journalist bought the mcdonalds.com domain name. Actually, “bought” isn’t really correct, because back then in order to obtain a dot com, you merely had to know how to send email to the right destination, and within days, the domain was all yours. That was how I first got my own strom.com domain back in 1993. Free of charge, even. It was the wild west. (Some may say it still is.)

The journalist was Josh Quittner, who was writing a story for Wired magazine about domain squatting, although the practice wasn’t glorified with an actual name back then. Josh noticed that the name wasn’t yet taken, so he tried to do the responsible thing and called a PR person at McD’s to figure out why they weren’t online and hadn’t yet grabbed it. Of course, back then almost no one had gotten their names — Burger King hadn’t yet obtained its own domain name, BTW.

The PR person, bless their heart, asked, “Are you finding that the Internet is a big thing?” Yeah, kinda. As he wrote, “There are no rules that would prohibit you from owning a bitchin’ corporate name, trademarked or not.” So he grabbed the domain and refused to turn it over until McDonald’s agreed to provide high-speed Internet access for a public school in Brooklyn. Eventually, the company figured out that they really wanted the domain for their own business, and domain squatting has never been the same since then.

Domain squatting now has a wide and varied subculture. Here is a 2020 report from Unit 42 that goes into further detail. The techniques include homograph attacks (using non-Roman character sets), combosquatting (adding keywords or subdomains to make names appear more legit), levelsquatting (using a very long character string, counting on browsers to truncate it and make it more believable), and I am sure many more perfidious variations.
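Several of these tricks amount to making one string look like another, which means simple tooling can flag many of them. Here is a minimal sketch of lookalike-domain detection — my own illustration, not code from the Unit 42 report — using a tiny hand-rolled confusables map plus string similarity:

```python
# Flag domains that resemble a protected brand name after mapping
# common character swaps back to the letters they imitate.
from difflib import SequenceMatcher

# Toy confusables map (illustrative, far from complete).
CONFUSABLES = {"0": "o", "1": "l", "rn": "m"}

def normalize(domain: str) -> str:
    d = domain.lower()
    for fake, real in CONFUSABLES.items():
        d = d.replace(fake, real)
    return d

def is_lookalike(candidate: str, brand: str, threshold: float = 0.85) -> bool:
    if candidate == brand:
        return False  # the genuine domain is not a lookalike
    a, b = normalize(candidate), normalize(brand)
    if a == b:
        return True  # homograph-style: identical once normalized
    # Otherwise fall back to plain string similarity (typosquats).
    return SequenceMatcher(None, a, b).ratio() >= threshold

print(is_lookalike("mcdona1ds.com", "mcdonalds.com"))  # True
print(is_lookalike("example.com", "mcdonalds.com"))    # False
```

Real homograph attacks use non-ASCII glyphs (a Cyrillic а standing in for a Latin a, and so on), so a serious implementation would normalize via the Unicode confusables data and watch live registration feeds rather than rely on a hard-coded dict.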

A complicating factor is we now have all kinds of domains like .xyz and .lawyer to contend with, which only increases the threat space that bad actors can occupy in domain impersonation. Josh emailed me today and said, “I figured that with the creation of so many top-level domains the shenanigans around domain-name squatting would abate but it just created loads of new problems. For instance, some scammers pretending to be decrypt.co (my crypto news site) created a mirror site with a very similar name and used it for phishing. They periodically send out email to millions of people claiming to be decrypt and urge people to connect their crypto wallets to collect tokens. Emailing the host site and alerting them to the scam did no good.” Yeah, wild west indeed.

I was reminded of this story when I saw that yet another business had let its domain registration lapse; the domain was purchased by tech consultants who are trying to give it back to its rightful owner. Why do these things happen? One major reason: domain ownership ultimately relies on humans to pay attention and renew at the right times. (I just renewed a bunch of mine, which I had wisely set up years ago to expire on January 1.) Also, in big companies like McDonald’s there may be several different domain owners spread among various departments, especially if a company has been acquired or has created new subsidiaries.
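The human-attention problem is scriptable, at least in toy form. A sketch, assuming you keep your own list of domains and renewal dates (the real dates come from your registrar’s WHOIS records or API; the domains here are placeholders):

```python
# Flag any domains in a portfolio that expire within the next N days.
from datetime import date, timedelta

portfolio = {
    "example.com": date(2025, 1, 1),   # placeholder domains and dates
    "example.org": date(2026, 6, 15),
}

def expiring_soon(domains: dict, today: date, window_days: int = 30) -> list:
    cutoff = today + timedelta(days=window_days)
    return sorted(name for name, expiry in domains.items() if expiry <= cutoff)

print(expiring_soon(portfolio, today=date(2024, 12, 15)))  # ['example.com']
```

Aligning every renewal on a single date, as I did with January 1, is itself a defense: one reminder covers the whole portfolio.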

Actually, there is a second reason: greed. Criminals have adopted these squatting techniques to lure victims in. Just a few days ago I thought I was buying some stamps from USPS.com, but was taken to some other domain that looked like it. You can’t be too careful. (And the USPS doesn’t discount its stamps by 40%, which should be a red flag.)

Book review: Micah Lee’s Hacks Leaks and Revelations

There has been a lot written about data leaks and the information contained therein, but few books tell you how to do this analysis yourself. That is the subject of the recently published Hacks, Leaks and Revelations.

This is a unique, interesting, and informative book, written by Micah Lee, who is the director of information security for The Intercept and has written numerous stories about leaked data over the years, including a dozen articles on some of the contents of the Snowden NSA files. What is unique is that Lee teaches the skills and techniques that he used to investigate these datasets, and readers can follow along and do their own analysis with this data and other collections, such as emails from the far-right group Oath Keepers, materials leaked from the Heritage Foundation, and chat logs from the Russian ransomware group Conti. This is a book for budding data journalists, as well as for infosec specialists who are trying to harden their data infrastructure and prevent future leaks from happening.

Many of these databases can be found on DDoSecrets, the organization that arose from the ashes of WikiLeaks and where Lee is an adviser.

Lee’s book is also unique in that he starts off his journey with ways that readers can protect their own privacy, and that of potential data sources, as well as ways to verify that the data is authentic, something that even many experienced journalists might want to brush up on. “Because so much of source protection is beyond your control, it’s important to focus on the handful of things that aren’t.” This includes deleting records of interviews, any cloud-based data or local browsing history for example. “You don’t want to end up being a pawn in someone else’s information warfare,” he cautions. He spends time explaining what not to publish or how to redact the data, using his own experience with some very sensitive sources.

One of the interesting facts that I never spent much time thinking about before reading Lee’s book is that while it is illegal to break into a website and steal data, it is perfectly legal for anyone to make a copy of that data once it has been made public and do their own investigation.

Another reason to read Lee’s book is that there is so much practical how-to information, explained in simple step-by-step terms that even computer neophytes can quickly follow. Each chapter has a series of exercises, split out by operating system, with directions. A good part of the book dives into the command-line interfaces of Windows, Mac, and Linux, and how to harness the power of these built-in tools.

Along the way you’ll learn Python scripting to automate the various analytical tasks and use some of the custom tools that he and his colleagues have made freely available. Automation — and the resulting data visualization — are both key, because the alternative is a very tedious line-by-line examination of the data. He uses searching the BlueLeaks data for “antifa” as an example (this is a collection of data from various law enforcement websites that documents misconduct), making things very real. Other tools covered include Signal, an encrypted messaging app, and BitTorrent, along with advice on using disk encryption tools and password managers. Lee explains how they work and how he used them in his own data explorations.
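To give a flavor of what that automation looks like, here is a minimal keyword-search sketch of my own devising (not Lee’s actual code; the file layout and keyword are placeholders):

```python
# Recursively scan a directory of text files for a keyword,
# reporting each matching file and line number.
from pathlib import Path

def search_files(root: str, keyword: str) -> list:
    hits = []
    for path in sorted(Path(root).rglob("*.txt")):
        text = path.read_text(errors="ignore")  # skip undecodable bytes
        for lineno, line in enumerate(text.splitlines(), start=1):
            if keyword.lower() in line.lower():
                hits.append((str(path), lineno))
    return hits
```

A real investigation layers on indexing (so repeated searches are fast) and visualization, which is exactly where Lee’s custom tools come in.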

One chapter goes into details about how to read other people’s email, which is a popular activity with stolen data.

The book ends with a series of case studies taken from his own reporting, showing how he conducted his investigations, what code he wrote, and what he discovered. The cases include leaks from neo-Nazi chat logs, the anti-vax misinformation group America’s Frontline Doctors, and videos leaked from the social media site Parler that were used during one of Trump’s impeachment trials. Do you detect a common thread here? These case studies show how hard data analysis is, but they also walk you through Lee’s processes and tools to illustrate their power.

Lee’s book is really the syllabus for a graduate-level course in data journalism, and should be a handy reference for beginners and more experienced readers. If you are a software developer, most of his advice and examples will be familiar. But if you are an ordinary computer user, you can quickly gain a lot of knowledge and see how one tool works with another to build an investigation. As Lee says, “I hope you’ll use your skills to discover and publish secret revelations, and to make a positive impact on the world while you’re at it.”

SiliconANGLE: The changing economics of open-source software

The world of open-source software is about to go through another tectonic change. But unlike earlier changes brought about by corporate acquisitions, this time it’s thanks to the growing series of tech layoffs. The layoffs will certainly change the balance of power between large and small software vendors, and between free and commercial software versions, and the role played by OSS in enterprise software applications could change.

In this post for SiliconANGLE, I talk about why these changes are important and what enterprise software managers should take away from the situation.


SiliconANGLE: Here are the major security threats and trends for 2024 – and how to deal with them

What a year 2023 was for cybersecurity!

It was a year the world became obsessed with generative artificial intelligence — and a year that brought new breaches with old exploits, a year that brought significant consolidation in the security tools marketplace, and a year when passkeys finally took hold, at least for consumers.

Are businesses better secured than before? Hardly. Attackers have continued to get more sophisticated, hiding in plain sight and using sneakier ways to penetrate enterprise networks. Ransomware is still a thing, and criminals are getting clever at using multiple tactics to extort funds from their victims.

In this story for SiliconANGLE, I’ve collected some of the more notable predictions for 2024 and offer my own recommendations for best security practices.

I know it when I see the URL

The phrase in today’s screed is adapted from an infamous 1964 Supreme Court decision, when Potter Stewart was asked to define obscenity. In a report issued today by Stanford researchers, the new phrase (when referring to materials relating to abused children) has to do with recognizing a URL. And if you thought Stewart’s phrase made it hard to create appropriate legal tests, we are in for an even harder time when it comes to figuring out how to prevent this in the age of GenAI and machine learning. Let me explain.

If you are trying to do research into what is called child abuse materials (abbreviated CSAM, and you can figure out the missing word on your own), you have a couple of problems. First, you can’t actually download the materials to your own computer, not unless you work for law enforcement and do the research under conditions akin to those of intelligence operatives in a secured facility (the now-infamous SCIF).

This brings me to my second point. Since you can’t examine the actual images, what you are looking at are bunches of URLs that point to them. URLs are also used instead of the actual images because of copyright restrictions. And that means looking at metadata, which could be in a variety of languages, because, let’s face it, CSAM knows no geographic boundaries. The images are found by sending out what are called “crawlers” that examine every web page they can find at a point in time.

Next, and this comes as no surprise to anyone who has spent at least one minute browsing the web, there is a lot of material to sift through: billions of files, as it turns out. The Stanford report found more than three thousand suspected CSAM images among them. Now that doesn’t seem like a lot, but when you consider that they probably didn’t catch most of it (they acknowledge a severe undercounting), it is still depressing.

Also, we (and by that, I mean most everyone in the world) are too late to prevent this stuff from being disseminated. The explanation is complicated and has to do with the way the GenAI and mathematical models have been constructed. The optimum time to have acted would have been, oh, two or more years ago, back before AI became popular.

The main villain in this drama is the Large-scale Artificial Intelligence Open Network’s LAION-5B dataset, which contains 5.85 billion data elements, half of which are in English. This is the data being used to train the more popular AI image tools, such as Stable Diffusion, Google’s Imagen, and others.

The Stanford paper lays out the problems in examining LAION and the methodology and tools they used to find the CSAM images. They found that anyone using this dataset has thousands of illegal images in their possession. “It is present in several prominent ML training data sets,” they state. While this is a semi-academic research paper, it is notable in that it provides some mitigation measures for removing this material. The steps are mostly ineffective or difficult to pull off, especially if your goal is to remove the material entirely from the training data itself. I won’t get into the details here, but there is one conclusion that is most troubling:

The GenAI models are good at creating content, right? This means someone can take a prototype CSAM image and have a model riff on it, creating various versions — using, say, the face of the same child in a variety of poses. I am sure that is being done in one forum or another, which makes me sick.

Finally, there is the problem of timing. “If CSAM content was uploaded to a platform in 2018 and then added to a hash database in 2020, that content could potentially stay on the platform undetected for an extended period of time.” You have to update your data — which is certainly being done — but there is no guarantee that your AI is using the most recent version, a problem that has been well documented. The researchers recommend that Hugging Face and others provide better reporting feedback mechanisms for when a user finds these materials. UC Berkeley professor Hany Farid wrote about this issue a few years ago, saying these materials “should not be so easily found online. Neither human moderation nor AI is currently able to contend with the spread of harmful content.”
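For context, the hash databases mentioned in that quote work roughly like the sketch below: a flagged file is fingerprinted once, and every new upload is checked against the set of fingerprints. (Production systems use perceptual hashes such as PhotoDNA rather than plain SHA-256 so that near-duplicates also match; this only illustrates the matching step, with made-up file contents.)

```python
# Check uploads against a database of fingerprints of known-bad files.
import hashlib

def fingerprint(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

# The database: fingerprints of files flagged in the past.
known_bad = {fingerprint(b"previously flagged file")}

def check_upload(data: bytes, db: set) -> bool:
    """Return True if the upload matches a known-bad fingerprint."""
    return fingerprint(data) in db

print(check_upload(b"previously flagged file", known_bad))  # True
print(check_upload(b"harmless upload", known_bad))          # False
```

The timing gap in the quote follows directly: a file uploaded before its fingerprint entered the database sails through unless the platform periodically re-scans old content.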

Nicki’s CWE blog: Meet me at the Berlin Hotel

Even long-time Central West Enders in St. Louis might not recognize Berlin Avenue, but the street has a storied past in our neighborhood. It is now called Pershing Avenue, and the corner of Pershing and Euclid has a commemorative plaque that hints at its history. In a post for Nicki’s blog, I take a walk back in time to show what happened on this little corner of our city.