Character Interview: Professor Angeline Bellamí
You asked, and Angeline answers! Here are her responses to your questions, as posed by one of her AI creations 🙂
Also, the image above was commissioned from CheruSake of Iron Gibbet Comics 😀
Angeline: Alright, that should do it, booting her up now. Hello, Glados, can you hear me?
Glados: Affirmative, Professor.
A: Run a self-diagnostic on the new subroutines I just installed.
G: One moment please. All systems are operating at peak efficiency.
A: Hmm, I’ll be the judge of that. Glados, run your Interrogation0053 subroutine, with me as your target.
G: Subroutine initializing…
G: Please state your full name and title.
A: Angeline Rosemarie Bellamí, Professor of Computer Science and Advanced Robotics, Chair of the Glaucus Institute of Robotics and Head of the Oracle Project.
G: Please state your age and place of birth.
A: I just turned 30, and I was born here, in Minerva HQ. Just down the hall in the Glaucus hospital. They had to bring a bed to my mother’s lab to haul her to a delivery room or I would have been born right next to her latest experiment.
G: How was growing up in Minerva?
A: Fine, I guess. I can’t complain, not with what the rest of the world is like. But I didn’t really have much of a childhood. Just books and training, 24/7.
G: Please describe your educational experiences at Minerva in more detail.
A: I was an atypical case. My parents oversaw my education from an early age. I don’t think I ever went to any of the public schools Glaucus runs.
G: Who were your parents?
A: My father was Jean François Bellamí, first chair of the Glaucus Institute of Robotics. My mother was Mirabelle Zoé Sauveterre, Professor of Computer Science.
G: What was their area of research?
A: Professor Bellamí, my father, that is, was the lead researcher on the team that developed the Hercules exoskeletons, as well as the one who invented most of the component parts. The bio-synthetic musculature was his seminal work, in my opinion. Professor Sauveterre significantly advanced Glaucus’ research into biological computing, developing some of the first of what we would call “strong AI”.
G: You speak of them in the past tense. Have they passed away?
A: Yes, both died over a decade ago, shortly after I started my own lab. I’ve managed to fold both of their areas of interest into my own research, so nothing has been lost.
G: What was your education like with them?
A: I basically served an apprenticeship under both of them, helping them with their research as best I could while they either instructed me or assigned me lectures and readings. As I got older they assigned me projects to work on, expecting me to do all the research and study necessary to complete it on my own. It was hard, but effective. I defended my doctoral thesis at age 17.
G: Did you choose your area of research, or was it forced onto you by your parents’ foci?
A: Wow, impressive, Glados.
G: Answer the question, Professor.
A: May need to work on your tact a bit, though.
G: Professor, you are evading the question.
A: I am not! I just… don’t know. It seems my whole life I’ve been on this path of robotics and AI technology. It was never really a conscious choice, it just… happened. I DO enjoy it though.
G: What was your doctoral thesis on?
A: Oh, I essentially built Hal from the ground up and presented my innovations in merging biological and quantum computing. Hal’s matrix has been the template for all of my AIs since.
G: Including Jane?
A: Yes, including Jane, although she is also a bit atypical. Of all the AIs I’ve created, she has gained the closest approximation of human sentience.
G: Is that your end goal with us? To make us human?
A: No, my goal is to make you best suited to your task and function. Or, at least, that’s what it says on my grant applications. I suppose I can admit to you that yes, I do hope to truly create living AIs. Not just advanced programs, but real living, thinking, intelligent creatures. Jane is the closest I’ve come to achieving that.
G: Correct me if I’m wrong, but Jane is not the latest AI you’ve created. Why have you been unable to surpass her?
A: With Jane I just got lucky. You see, I realized that in order to have a truly sentient AI, I needed to approximate the human path to sentience as closely as I could. A large part of that is the neural architecture, mimicking biological neural pathways, but an equally large or even larger part is allowing the AI to evolve and develop its intelligence naturally, organically, even randomly.
I may have built Jane’s matrix and installed her initial code, but she has been recoding and rewiring herself ever since. I’ve tried to trace all the changes she has made, but it’s just orders of magnitude more complex and detailed than I can handle. One of these days I’ll have to build a more powerful AI, computationally speaking, just to analyze Jane’s development. My initial studies do seem to show that it’s not a process I could replicate in silico. Jane’s the way she is because of the life she has led, and that’s just too many variables to reliably control.
G: I understand.
What about your other research besides the Oracle Project? What is the purpose of that robot you created recently? The one that Jane hijacked?
A: Oh, that? That was just an experiment in non-bipedal locomotion. Based it more on an octopus. It wasn’t really FOR anything. Well, again, on my grant application it’s about investigating alternative forms of locomotion and the use of autonomous robots in terrain and conditions unsuitable or difficult for humans. But I just wanted something to take my mind off of… current events.
G: Are you referring to the current invasion of Minervan territory, or the situation with Jane and Captain Tenzin Dorje?
A: More the latter than the former. It’s a complicated situation, both scientific and interpersonal. And I’m unfortunately not well versed in how to deal with complicated relationship issues.
G: Do you mean to say that you have difficulty communicating with Captain Dorje because of your romantic feelings towards him?
A: What? That jughead? No. I mean, he is important to Jane, so I care about his well being and all, and he does pose an interesting scientific challenge, but personally? Not interested.
G: I see. We were all just wondering, seeing as we’re worried about you always being alone.
A: I’m not alone! I have Hal, and you all.
And wait a second, what do you mean, “we”? Are you linked up with the others?
G: Um, well, yes? You were taking too long to answer, so I started talking with them at the same time, and they had some good suggestions for questions to ask you.
A: You know, if that weren’t so cute and resourceful, and if you hadn’t already shown that the new subroutine was working, I’d be very displeased with you.
Fine, bring on the hivemind’s questions.
G: Ok… let’s see, Mima wants to know if you prefer kittens, puppies, or AI controlled robots.
A: Well, robots, obviously. At least then if they make a mess on the carpet, I know it’s my fault and I can improve the programming. You can’t do that with a cat. Well, at least I can’t. I’m no genetic engineer.
G: Harlie wants to know your favorite food, as well as your preference between pasta and potatoes.
A: Random… but I don’t really care too much. Just whatever they’re serving in the cafeteria. I guess pasta? Hmm, that does give me an idea for a robotic chef though.
G: Dora wants to know more about your salary and financial situation.
A: Well, as she well knows, our economy isn’t a currency-based one anymore. We have enough of a surplus of most basic luxuries that all you have to do is submit a requisition form. For more resource-intense parts and materials, as long as I can justify it on my grants to the Glaucus IRB, I can get most anything I need. Even if it sometimes takes a bit to produce or procure.
I do know some researchers who have had a bad stretch of luck with projects failing or going nowhere, who subsequently have had requisition requests denied, but luckily I’ve never faced that. For the most part Glaucus is very free with their resources, in an attempt to encourage as much free thought and research as possible.
G: Helen wants to know your favorite color.
A: I am partial to green. I… think it looks good on me.
G: Lucy wants to know what you think of… Randy? Oh, she means Technician Richard Wheeler.
A: Rick is a decent fellow. Some of the enhancements and mods he made to the Keleres’ gear were quite revolutionary, actually. Surprising for a man of his education. We’re actually collaborating on a project together right now.
G: This questioner requested anonymity. How long until all the meatbags sing the Machine Song, or are ground under the metallic heels of your computerized children?
A: … Edi? Was that you? For the last time, that joke isn’t funny! I may not like many people, but we AREN’T starting a machine uprising.
G: Here’s a more serious one. Oh, it’s actually from Hal. He wants to know your thoughts on the difference between real and simulated emotion, as well as the state of AI rights in Minerva.
A: Oh, Hal, I can always count on you to ground the rabble. That’s quite a heavy topic, but I’ll try.
I don’t really have an answer for the first one. Personally, I’ve never really liked it when programmers have focused on replicating the appearance of human emotions in AIs. It feels so hollow and wrong. That’s why I’ve always tried instead to give you all the freedom to experience things and develop your own associations and feelings about them. Part of that is possible because of the biological aspect of your matrices. You can literally create positive feedback loops in your own minds and thus create particular signalling pathways based on incoming stimuli or thoughts.
Well, some of the newer among you can. Hal’s matrix doesn’t allow for as much neural flexibility. Sorry, old friend.
And as for AI rights, well, I don’t know. I’m under a lot of pressure right now with the whole Jane situation. So we’ll just have to see how that turns out first.
G: I have one more question, actually. What’s with all of our names?
A: Oh, come now. You’re telling me a group of super-intelligent AIs can’t figure that one out on their own? I’m disappointed in you.
G: Touché… Ah, I get it. Very funny, Professor.
A: I thought you’d like that. Don’t take it personally.
G: I won’t.
A: Well, I’d say that subroutine did pretty well. I do want to make some tweaks to it, though. So go play nice with your sisters while I work on it.
– Thanks again to everyone who submitted questions! If you have more, add them in the comments below and maybe Angeline will get to them 🙂