Wednesday, October 16, 2019

Translation in PHP using class constants

Translation in PHP frameworks is done in several different ways, but in all of them (as far as I know) the source text is a string. The translations are stored in CSV files, .mo files, or in PHP arrays.

I propose to store them in class constants. This makes loading translations faster and integrates better with the IDE.

Translation classes

In this proposal a translation source file looks like this:

class CatalogTranslator extends Translator
{
    const price = "Price";
    const name = "Name";
    const nTimes = "%s times";
}

Each module has its own source file; this one belongs to the module Catalog. Note that we now have a source identifier (i.e. nTimes) and a default translation ("%s times" in English, in this application).

The translation file (for the main German locale) is like this:
class CatalogTranslator_de_DE extends CatalogTranslator
{
    const price = "Preis";
    const nTimes = "%s Mal";
}
Note that the translation file extends the source file. This file contains two translations, but lacks one for "name". That's OK; the value is inherited from the base class.

Translation calls

A function that needs translation functionality starts by creating an instance of a translation class for this module:
$t = CatalogTranslator::resolve();
And here is an example of an actual translation call:
$name->setLabel($t::name);
There it is: $t::name. It is a constant from a class that was resolved by the CatalogTranslator::resolve() call. When the request's current locale is de_DE, the class will be CatalogTranslator_de_DE; when the locale is fr_FR it will be CatalogTranslator_fr_FR. Which of the classes is selected is determined by the resolution function resolve.
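To make the mechanism concrete, here is a minimal, self-contained sketch. A global $locale stands in for the framework's request locale, so resolve() here is simplified compared to the framework-backed version shown later:

```php
<?php
$locale = 'de_DE'; // stand-in for the request's active locale

class Translator
{
    /**
     * @return $this
     */
    public static function resolve() {
        global $locale;
        // Append the locale to the runtime class name and instantiate it.
        $class = static::class . '_' . $locale;
        return new $class();
    }
}

class CatalogTranslator extends Translator
{
    const price  = "Price";
    const name   = "Name";
    const nTimes = "%s times";
}

class CatalogTranslator_de_DE extends CatalogTranslator
{
    const price  = "Preis";
    const nTimes = "%s Mal";
}

$t = CatalogTranslator::resolve();
echo $t::price, "\n";               // Preis: overridden in the de_DE class
echo $t::name, "\n";                // Name: inherited from the source class
echo sprintf($t::nTimes, 3), "\n";  // 3 Mal
```

Note that constants holding printf-style placeholders, like nTimes, are simply passed through sprintf at the call site.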

Translator class resolution

Did you notice that CatalogTranslator extends the class Translator? This base class contains the static resolve function that determines the runtime translation class:
class Translator
{
    /**
     * @return $this
     */
    public static function resolve() {
        // $xyz: a framework object that knows the active locale (explained below)
        return $xyz->getTranslator(static::class);
    }
}
The @return $this annotation tells the IDE that an object of class CatalogTranslator is returned. A subclass of it, in fact.

$xyz stands for some object in your framework that defines the getTranslator function. Important is only that the runtime class (static::class) is passed to determine the name of the active translation class.

The function getTranslator looks like this:
class Xyz
{
    protected $translatorClasses = [];

    public function getTranslator(string $class): Translator
    {
        if (!array_key_exists($class, $this->translatorClasses)) {
            $locale = $this->getLocale(); // however your framework exposes the request locale
            $translatorClass = $class . '_' . $locale;
            $this->translatorClasses[$class] = new $translatorClass();
        }
        return $this->translatorClasses[$class];
    }
}
As you can see, the resolver fetches the active locale from the request (again, your framework will have a different approach) and simply appends it to the runtime class name of the translator:

CatalogTranslator + de_DE = CatalogTranslator_de_DE

Advantages and disadvantages

An advantage of this approach is that the translations do not need to be loaded into memory on each new request: they already reside in the opcache. Another is that translations are clearly scoped to the module they are defined in, so there are no naming conflicts with other modules. Further, this approach is IDE friendly. Just control-click on the constant and you are in the source file. From there you can find out where the source text is used (Find Usages) and which translations exist (Is Overridden), all with standard IDE functionality.

Disadvantages: using PHP constants to store translations will not be familiar to developers and translation agencies. It is easier to type a source text directly in the code than to define a constant first. Also, module-scoped translations mean that you may have to translate the same text several times in different modules. But if that is your only concern, you can adapt the approach into a single set of system-wide translation classes.



Wednesday, May 15, 2019

Natural language interaction in flowcharts

Flowchart

In this post I will sketch some of my ideas about a generic natural language interaction program in the form of a flowchart.

Natural language interaction is any form of computer program that allows you to interact with it in a complex form of natural language (e.g. English, Chinese). It is a subject I have been working on for several years.

A flowchart visualizes the flow of control of a program. It consists mainly of interconnected tasks, but I added intermediate data structures (in blue) because of their special relevance.

What you're about to see is an attempt to unify aspects of many historical NLI systems into a single system. It's still very sketchy, but I needed to have these thoughts visualized, because it gives me a clearer goal.

Check this Wikipedia article if you're not sure about some of the flowchart symbols.

The main loop

The main loop of the program consists (as can be expected) of a sequence of asking for input, processing it, and displaying the output.

I split the processing part into analyzing and understanding. By analyzing I mean processing the textual string into a logical, context-insensitive data structure (think predicate calculus). By understanding I mean processing this logical structure into a domain-specific representation: a speech act.

Both the analysis and understanding part may fail. Analysis is language dependent. If the program supports multiple languages and analysis in one language fails, the other languages may be tried. The language that succeeds will become the default language for the next input sentence. Understanding is domain specific. If the system supports multiple domains, and understanding fails for one domain, other domains may be tried.
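As a sketch of that retry logic, assume an analyse() stand-in that merely checks a tiny made-up lexicon; the point is only the loop that promotes the successful language to become the new default:

```php
<?php
// Toy analyse(): "succeeds" when every word of the input occurs in the
// language's (tiny, made-up) lexicon; returns null on failure.
function analyse(string $input, string $language): ?array
{
    $lexicons = [
        'en' => ['the', 'block', 'is', 'red'],
        'de' => ['der', 'block', 'ist', 'rot'],
    ];
    foreach (explode(' ', strtolower($input)) as $word) {
        if (!in_array($word, $lexicons[$language] ?? [], true)) {
            return null;
        }
    }
    return ['analysed' => $input, 'language' => $language];
}

// Try each supported language; the language that succeeds moves to the
// front of the list and so becomes the default for the next sentence.
function analyseInput(string $input, array &$languages): ?array
{
    foreach ($languages as $i => $language) {
        $structure = analyse($input, $language);
        if ($structure !== null) {
            array_splice($languages, $i, 1);
            array_unshift($languages, $language);
            return $structure;
        }
    }
    return null; // analysis failed in all supported languages
}

$languages = ['en', 'de'];
analyseInput('Der Block ist rot', $languages);
// $languages is now ['de', 'en']: German has become the default
```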




The parts that look like this are called predefined processes. They are placeholders for other flowcharts, shown below.


Analyse input

The analysis of input forms the standard pipeline of natural language processing.



Tokenization splits a string into words. Parsing creates a parse tree. Semantic analysis maps the forms to meaning. Quantifier scoping creates nested structures for quantifiers like "all". Pragmatic analysis resolves references like "this" and "her".

The analysis phase needs information in the form of a lexicon (word forms), a grammar (rewrite rules) and semantic attachments (from word form and phrase to meaning structure).
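The pipeline can be pictured as a chain of stages, each consuming the previous stage's output. The stage implementations below are trivial stand-ins; only the shape of the chain matters:

```php
<?php
// The analysis pipeline as an ordered list of stages (toy versions).
$pipeline = [
    'tokenization' => fn(string $s): array => explode(' ', strtolower($s)),
    'parsing'      => fn(array $tokens): array => ['s' => $tokens],        // toy "parse tree"
    'semantics'    => fn(array $tree): array => ['meaning' => $tree['s']], // forms to meaning
];

// Feed the output of each stage into the next.
function runPipeline(array $pipeline, $input)
{
    foreach ($pipeline as $stage) {
        $input = $stage($input);
    }
    return $input;
}

// runPipeline($pipeline, 'The block') yields ['meaning' => ['the', 'block']]
```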

Ask user

Any nontrivial system sometimes needs to ask the user to disambiguate between several possible interpretations. To do this, it is sensible to present the user with a multiple choice question (i.e. not an open question). This way, any answer the user gives is OK and will not cause a new problem in itself.



The added idea here is that a user should not be asked the same question more than once in a session, so any answer he or she gives is stored for possible later use. This store, the "dialog context", may be used for pragmatic variables as well. The active subject of a conversation, for example, which may be referred to as "he", "she" or "it", might well be stored here.
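A sketch of such a store, assuming a hypothetical askUser callback that performs the actual multiple-choice interaction:

```php
<?php
// Remembers earlier answers so the same question is never asked twice
// within a session.
class DialogContext
{
    private array $answers = [];

    public function ask(string $question, array $options, callable $askUser): string
    {
        if (!array_key_exists($question, $this->answers)) {
            $this->answers[$question] = $askUser($question, $options);
        }
        return $this->answers[$question];
    }
}

$context = new DialogContext();
$timesAsked = 0;
$askUser = function (string $question, array $options) use (&$timesAsked): string {
    $timesAsked++;
    return $options[0]; // stand-in: the user picks the first option
};

$context->ask('Which "block" did you mean?', ['the red one', 'the blue one'], $askUser);
$context->ask('Which "block" did you mean?', ['the red one', 'the blue one'], $askUser);
// $timesAsked is 1: the second call was answered from the dialog context
```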

Speech act processing

This is the central component of an NLI system. The input is a logical structure that has been formatted in a way that makes sense in a certain domain; the term "speech act" fits this description well. The system then looks up a "solution", a way of handling this speech act. This solution consists of a goal (which is usually just the contents of the input, in procedural form) and a way of presenting the results (this depends on the type of speech act, and even on the type of question). The latter thus forms a goal in itself.
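As a sketch, a solution table might map a speech-act type to a goal and a presenter. All names and data shapes here are made up for illustration:

```php
<?php
// A tiny facts "database" and one solution, keyed by speech-act type.
$facts = [['block1', 'color', 'red']];

$solutions = [
    'yes-no-question' => [
        // The goal: look the asserted triple up in the database.
        'goal'    => fn(array $act, array $facts): bool =>
            in_array($act['triple'], $facts, true),
        // The presenter: phrase the result for this type of speech act.
        'present' => fn(bool $result): string => $result ? 'Yes.' : 'No.',
    ],
];

function handleSpeechAct(array $act, array $facts, array $solutions): string
{
    $solution = $solutions[$act['type']];          // look up a way of handling
    $result   = ($solution['goal'])($act, $facts); // achieve the goal
    return ($solution['present'])($result);        // present the result
}

echo handleSpeechAct(
    ['type' => 'yes-no-question', 'triple' => ['block1', 'color', 'red']],
    $facts,
    $solutions
); // Yes.
```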



The red square is used for internal data structures. The one shown here lists a number of available solutions.

Process procedure

The assumption in this post is that the internal processing of a speech act is done in a procedural way (think Prolog), where a goal is achieved by fulfilling its dependent subgoals.




The symbol



stands for a choice. One of these ways may be used to process a procedure.

The goal may just be to look up a fact in a domain database. Or it may involve reasoning or planning. Finally, it may require introspection into the system's own former goals, an advanced topic that only SHRDLU has handled. I apologize for the extreme sketchiness of this part; I have never programmed planning, nor the introspective parts.

The takeaway of this diagram is that any goal may be achieved in many ways, all of which require intelligence. A goal may have subgoals, and these subgoals can have subgoals of their own. The leaf nodes of such a structure consist of a simple database lookup, or a simple action. An action may update a database, or perform an actual action in the world (again, SHRDLU being the perfect example).
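A minimal sketch of this recursion, where a leaf goal is a simple database lookup and a composite goal succeeds when all of its subgoals do (all names made up):

```php
<?php
// Achieve a goal: either a leaf (a simple lookup in the facts list)
// or a composite goal that requires all of its subgoals to succeed.
function achieve(array $goal, array $facts): bool
{
    if (isset($goal['subgoals'])) {
        foreach ($goal['subgoals'] as $subgoal) {
            if (!achieve($subgoal, $facts)) {
                return false;
            }
        }
        return true;
    }
    return in_array($goal['fact'], $facts, true); // leaf: database lookup
}

$facts = ['block1 is red', 'block1 is on the table'];
$goal = ['subgoals' => [
    ['fact' => 'block1 is red'],
    ['fact' => 'block1 is on the table'],
]];
// achieve($goal, $facts) is true; it fails as soon as any leaf fact is missing
```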

Database query and update

Transforming a logical structure into a database query is an art in itself, but in a simple flowchart it just looks like this:

 

My approach to accessing a database is not to create one grand query that fetches all information at once. Instead, I think it works better to have many elementary queries, each selecting a single field. No aggregations.
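Against a toy in-memory table, one such elementary query (made-up names throughout) selects a single field given the other two:

```php
<?php
// A toy triple table and one elementary query on it.
$db = [
    ['subject' => 'block1', 'predicate' => 'color', 'object' => 'red'],
    ['subject' => 'block2', 'predicate' => 'color', 'object' => 'blue'],
];

// Select the object field for a given (subject, predicate) pair.
function selectObject(array $db, string $subject, string $predicate): ?string
{
    foreach ($db as $row) {
        if ($row['subject'] === $subject && $row['predicate'] === $predicate) {
            return $row['object'];
        }
    }
    return null;
}

// selectObject($db, 'block1', 'color') yields 'red'
```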


