AI Musings #2 (4/25/2019)

Thoughts on ECGC, more node-graph-computing weirdness, and some notes on ethics. Because people are creepy AF with tech…

East Coast Games Conference

  • My first visit to ECGC last week in Raleigh went well. It’s a good East Coast conference that deserves to be larger, and I’ll be going back next year. Maybe I’ll even do a talk on the Interrogative system.
  • Raleigh is a nice little town. Compared to Queens, it’s dead quiet. And while I spent a bit of time in the South back in my Marines days, quiet has always been strange to me. Except out in nature; that shit rocks. But in a “city”, I always get the urge to start asking people what’s going on.
  • “The Black Mirror Implications Of Using Data Exhaust And Machine Learning To Model People” was a seriously creepy talk. The speaker, Richard Boyd, went over the simulation of humans via machine learning and social data. We all leave this data out there, and what’s not freely available can be bought. And as much as he lambastes what Cambridge Analytica did, it’s no better to “nudge” people to market anything. State your case for your product, period. Playing with people’s realities is bad, and marketers all know it. What makes it acceptable to them is money, which has shifted their own realities enough to see the damage they’re doing as somehow different from what adversarial intelligence agencies are doing. I’m very glad he did the talk and brought it to light, especially the personality simulation stuff.
  • There were other really good talks, and I enjoyed networking with everyone there. It’s nowhere near the size of GDC, but it’s also nowhere near the cost. So, if you want a conference to support, this is it.
  • As much as I like my hometown of NYC, the conferences here couldn’t pull it off. There are the meetups, which are nice, but a solid game dev conference just doesn’t happen here. Probably because conference venues here are massive, and priced accordingly.

Knowledge Graph AI, Ontologies, nodes, etc…

  • The main issue to address is the growth and iteration of the network nodes, and how that should be handled. Going back to biology as inspiration, I’ve been rewatching some videos on the semantic maps of the brain built under fMRI, and how they show where the words people heard were categorized. Specifically, brains did indeed seem to store multiple instances of words in different areas. That raises the question of whether words wind up mapping to multiple concepts because of the relatedness of the word’s initial definition. How many words do you need to say “top”? Just the one, it seems, for most cases, which is probably why you see “top” stored in several places (see the sketch after this list).
  • That also dovetails with George Lakoff’s findings regarding embodiment and language. Basically, we talk about things in terms of the actions and spatial relationships we have with our environment. We talk about unfeeling people being “cold”, or about “getting up and running”, or ask “what’s up”. I can “see” what you mean, “hear” what you’re saying, get “where you’re coming from”, “get a feel” for something, etc. And idioms like this exist across pretty much all languages. Not the exact same phrases, mind you, since we all experience slightly different environments, influenced by the phrases of past languages and cultures.
  • How does that relate to AI assistants? Well, how do you address their embodiments? How do they? How do they communicate in ways that bridge their experience to ours? Do they need to? I guess it depends on your aims. It could be anywhere from “not at all” to “complete embodiment”, and then you need to define that in your context. That’s down the road a ways, though; the nearer issues for me are graph representation, both visually and in data.
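
To make the word-to-concept idea above concrete, here’s a minimal sketch in Python. Everything in it (ConceptNode, Lexicon, the three senses of “top”) is made up for illustration; it’s not from any real system, just one way a single surface word could point at several concept nodes in a graph.

    from dataclasses import dataclass, field
    from typing import Dict, List

    @dataclass
    class ConceptNode:
        concept_id: str  # e.g. "top/spatial" (hypothetical id scheme)
        definition: str  # a short gloss of this particular sense
        related: List[str] = field(default_factory=list)  # edges to other concept ids

    class Lexicon:
        """Maps one surface word to every concept node that uses it."""
        def __init__(self) -> None:
            self._by_word: Dict[str, List[ConceptNode]] = {}

        def add(self, word: str, node: ConceptNode) -> None:
            self._by_word.setdefault(word, []).append(node)

        def senses(self, word: str) -> List[ConceptNode]:
            return self._by_word.get(word, [])

    lex = Lexicon()
    lex.add("top", ConceptNode("top/spatial", "the highest point of a thing"))
    lex.add("top", ConceptNode("top/garment", "clothing worn on the upper body"))
    lex.add("top", ConceptNode("top/toy", "a spinning toy"))

    # One word, several nodes: the same word shows up in multiple "places",
    # much like "top" showing up in several areas of those semantic maps.
    for node in lex.senses("top"):
        print(node.concept_id, "-", node.definition)

The point isn’t the code; it’s that the word is just a label on edges into the graph, while the concepts are the nodes that actually grow and iterate.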

Related to personal AI assistants

  • There’s been some work on gender-neutral voices for AI assistants, and I like it for several reasons. One is that it can help prevent too much anthropomorphism from creeping into perceptions of AI, especially as it becomes more capable yet still not “alive”. Related to that, not all AI voices will be tied to bipeds or visually gendered robots, and they shouldn’t be forced into something they’re not. Making WALL-E and EVE a boy/girl pair in a kids’ movie is fine and dandy, but probably not the best policy for the real world.
  • “Personal computation”, as Max Kreminski puts it on Twitter. I like that idea, and it’s been behind my thinking on personal AI assistants for the past couple of years. People should not only own their data, but control how that data works for them. Putting it up in the cloud does nothing towards those goals, and quite a bit against them.
  • My wife and I have been rewatching a few of the MCU movies in anticipation of Avengers: Endgame. I feel like there are quite a few lessons on what to do and not to do with AI assistants in those movies. Karen, from Spider-Man: Homecoming, is probably my favorite, followed in order by Ultron, FRIDAY, and JARVIS. There’s a growth arc in those that of course gives way to Vision, who is more than AI, so he’s not on the list (that’s a Marvel/comics discussion, not an AI discussion).