Updating the data structures for Interrogative

It’s all about the data

I have to admit that I had resisted the urge to change the tables in the database for a few months, mainly because I’m as lazy as anyone else, and all of my work so far had been done against the data structured as it was. However, I find it hard to imagine that people would want to buy an AI tool that asked them to add 20+ tables to their database, many of which had only a dozen or so lines of data in them. It was wasteful. And besides, there are a few gains to be made by using 3-4 tables instead…

The way the data has changed - the boring part

The short answer here is that it really hasn’t, but it really has. The semantic data that gets pulled for all of the dialog and gameplay uses follows generally the same format, though it is divided up into two “silos”: Global data, and NPC data.

Global is where most of the more “static” game data would reside. Semantic data describing objects, events, items, etc., would go here. That data does not change all that much. A chair is still a chair, and when you create that item, you don’t need to recreate the data that goes with it, except any unique information, of course. Just link the chair with its information, and you’re good to go.

NPC data is just that: semantic data dedicated to NPCs. This data is volatile if your NPCs die and respawn with information generated on the fly, or fairly static if your game is more traditional, with NPCs that respawn unchanged or NPCs that are more story-oriented. In those cases, your data isn’t moving a whole lot. Either way, the separation between the two silos seemed pretty logical.

Predicates and attributes (from here on out I’m just going to call them Attributes, to be consistent) are housed in their own table, and there is a table for “Core” data, which is mainly used by the editor to store information it needs to run. These two tables are very stable, and don’t get very big at all.

And then there are two more tables dealing with text data, one each for the Global and NPC data tables. Semantic data using text points to text strings in these tables, along with a localization ID, and pulls the correct, localized text for use. You could stick the localization ID column in the semantic data tables as well, but then you’re also duplicating the other data (and the IDs of the semantic data itself then change, which is a much bigger problem to solve). This way you can link to the information and localize it independently, and then just add a number and option for that language and it should all work seamlessly.
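To make the linkage concrete, here is a minimal sketch of that arrangement using SQLite. The table and column names (`global_data`, `global_text`, `text_id`, `locale_id`) are my own assumptions for illustration, not the tool’s actual schema:

```python
import sqlite3

# Hypothetical schema: semantic rows link to a text_id; each locale
# adds a row in the text table, so the semantic data (and its IDs)
# never changes when a new language is added.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE global_data (id INTEGER PRIMARY KEY, attribute TEXT, text_id INTEGER);
CREATE TABLE global_text (text_id INTEGER, locale_id TEXT, text TEXT);
""")
con.execute("INSERT INTO global_data VALUES (1, 'Description', 100)")
con.executemany("INSERT INTO global_text VALUES (?, ?, ?)", [
    (100, "en", "A sturdy wooden chair."),
    (100, "de", "Ein stabiler Holzstuhl."),
])

def localized_text(data_id, locale):
    # Join semantic data to its text strings and pick the requested locale.
    row = con.execute(
        "SELECT t.text FROM global_data d JOIN global_text t "
        "ON d.text_id = t.text_id WHERE d.id = ? AND t.locale_id = ?",
        (data_id, locale)).fetchone()
    return row[0] if row else None

print(localized_text(1, "de"))  # -> Ein stabiler Holzstuhl.
```

Adding a language is then just more rows in the text table; nothing in the semantic table moves.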

The not-so-boring parts

So now that we’ve condensed our data to a few tables, we’ve also gained some serious flexibility. Here are a few things that happen now:

  • Attributes now come with Semantic Gradients: Attributes mean so much more than just Predicates being used as action words for dialog. Attributes describe everything, and because of that, you can do two things with that data.
    • Speak to the game: You can quantify the data to the game in a numeric fashion through enumeration. A texture in the game means a lot to the player, who understands what an ice texture means, but less to an NPC, who “looks” at the texture and currently sees nothing. Attributes of the object that the NPC looks at can tell the NPC that the object has a Slippery Attribute of 1.0, which on a scale of -1.0 to 1.0 is pretty slippery. The NPC can then use that number in its pathfinding calculations.
    • Speak to the player, through the game: That same NPC can describe the object to the player in another context as “Slippery” using what is called a Semantic Gradient. I discussed Semantic Gradients here, and talked about how they were used to describe the personality traits of the NPCs (which are described for the AI in terms of -1 to 1 scales). The traits have gradients assigned to them that the NPCs can access and, using fuzzy logic, pick the closest word that reflects the numeric value. Here, 1.0 is called “Slippery” in the Slippery Attribute’s Semantic Gradient, and that’s what’s used. Other Attributes that can be used with Semantic Gradients for NPCs (or other descriptive purposes) would be things such as Softness, Roughness, Hardness, Flexibility, Transparency, etc. Whatever you need, you can create an Attribute for it and assign it a Semantic Gradient of adjectives; that becomes the enumerated vocabulary the AI can use when talking to the player, instead of using a number.
  • NPCs can store just about any Attribute you think up
    • Within the semantic data format, you can store strings of data that get parsed according to what kind of Attribute it is tagged to be. The BaseTraits Attribute will tell you that you’ve got a set of 17 floats to parse that make up the base traits of the NPC. You can create an Attribute that is a pointer to items for your inventory system, or other custom Attributes that get parsed in whatever way you need.
  • You can now have multiple sets of dialog templates!
    • I went over the ability to run your data through dialog templates, so that you can fill in the blanks and let the NPCs talk in specific ways. One of the limitations of this was that there was really just one set of templates pegged to the Attributes for dialog actions, which was rather limited. With the expanded range of Attributes, you can now assign the NPC a set of dialog templates, in whole or in part, so that you are not limited to one set of generic templates for all of your characters. Take that a step further, and you can create sets of templates for emotional states, situations, etc. It’s a far more flexible system, and replaces the need for marking up your templates to parse for “flavor text” or any additional processing that may or may not be worth it, from a content perspective. Your narrative designers will thank you for this.
  • Knowledge representation is customizable now
    • So, the prior data structure leaned heavily on the category-level method of organizing the information, and that worked well. However, there are times when you want an NPC to know fragments of information in a category that do not correspond to one “level” of knowledge. The Object Knowledge Level tags the semantic data itself with a knowledge level, which you can then use to more quickly query the database. Using that column, you can also assign knowledge to the NPC at a more granular level when you need to, for NPCs that have an incomplete or uneven knowledge of a subject.
    • In addition to the above, you can also use the Attribute that allows for the category-level method, which is still fully supported. As before, you need to parse the string representation of the knowledge and sort it, but you can now use the above method first as a faster method before resorting to this, if you find this too generic or too slow.
    • But wait, there’s more! Use a custom Attribute to assign knowledge in a way that works best for you, then parse and use it as you see fit! Mix and match if you want. The knowledge representation field is a string, and so you can put whatever you like in there.
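A Semantic Gradient like the one described above can be sketched as a simple nearest-anchor lookup. The anchor values and adjectives here are my own illustrative choices, not the tool’s built-in vocabulary:

```python
# Hypothetical Semantic Gradient for the Slippery Attribute: adjectives
# anchored at points on the [-1.0, 1.0] scale.
SLIPPERY_GRADIENT = {
    -1.0: "grippy",
    -0.5: "textured",
     0.0: "smooth",
     0.5: "slick",
     1.0: "slippery",
}

def describe(value, gradient):
    # Pick the adjective whose anchor is closest to the numeric value;
    # a fuzzier system might blend or weight neighboring anchors instead.
    anchor = min(gradient, key=lambda a: abs(a - value))
    return gradient[anchor]

print(describe(1.0, SLIPPERY_GRADIENT))  # -> slippery
print(describe(0.1, SLIPPERY_GRADIENT))  # -> smooth
```

The same number that feeds the NPC’s pathfinding (1.0) feeds the dialog (“slippery”), which is the whole point of quantifying Attributes this way.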
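The tag-driven parsing of Attribute strings might look something like the following. The parser names, delimiters, and the `ItemRefs` Attribute are assumptions for illustration; only the 17-float BaseTraits format comes from the description above:

```python
# Hypothetical parsers keyed by Attribute tag: the stored value is just
# a string, and the tag decides how that string gets interpreted.
def parse_base_traits(raw):
    traits = [float(x) for x in raw.split(",")]
    assert len(traits) == 17, "BaseTraits is a fixed set of 17 floats"
    return traits

def parse_item_refs(raw):
    # Illustrative custom Attribute: pointers into an inventory table.
    return [int(x) for x in raw.split(";") if x]

PARSERS = {"BaseTraits": parse_base_traits, "ItemRefs": parse_item_refs}

def parse_attribute(tag, raw):
    return PARSERS[tag](raw)

print(parse_attribute("ItemRefs", "12;7;42"))  # -> [12, 7, 42]
```

Registering a new parser is all it takes to support a new custom Attribute, which is what makes the single-table approach stretch so far.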
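And the per-NPC template sets with a generic fallback can be sketched like this. The template keys, placeholder syntax, and wording are all hypothetical, not the tool’s actual format:

```python
# Hypothetical dialog template sets: a generic set plus a partial set
# for an emotional state; the state set wins when it covers the action.
GENERIC = {"greet": "Hello, {player}."}
ANGRY = {"greet": "What do you want, {player}?"}

def speak(action, state_templates, **slots):
    # Use the state-specific template when one exists, else fall back
    # to the generic set, then fill in the blanks.
    template = state_templates.get(action, GENERIC[action])
    return template.format(**slots)

print(speak("greet", ANGRY, player="Ayla"))  # -> What do you want, Ayla?
print(speak("greet", {}, player="Ayla"))     # -> Hello, Ayla.
```

Because a state set can be partial, a character only needs templates for the lines that actually differ, rather than a full duplicate set per mood.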

Closing thoughts

Heh, I actually didn’t think I had as much to talk about here. As you can imagine, the above changes have required changes in the tool itself, which is a good thing, because I’m now able to get rid of a few of the lists and condense things into a more user-friendly UI. Also being added is a dialog template editor, since that is now a bigger part of the tech.

Having booked my flight and GDC tickets, I’m now available for meetings, so feel free to email me about contracts, licensing, and more. Demos will be available by GDC, and will also be posted here on the site.

Next Blog: Probably talking about the new editor look and feel, and the demo, as those are the main focus right now. Probably some more options I see possible with the new data structures, and I’ll probably outline some of the Attributes available to you when using this system.
