Examples
This page illustrates Dyna with a number of short programs. Their brevity is rather typical. Note that you also need a short C++ driver program to assert the input and extract the output.
You can find more modern examples in the Powerpoint slides associated with our recent papers.
Warning: Only a subset of the Dyna language is implemented in the downloadable prototype, so not all of these programs will currently run. Furthermore, the language has changed a fair bit since this page was written. The examples relating to NLP should work in principle in the prototype, but be warned that much of the code here is untested. For full running examples, try the demos that come with the distribution.
Note: Some of the syntax is being improved in the new version. The syntax shown here reflects an earlier design.
Area of a rectangle
:- item(item, double, 0). % all variables are double-precision real numbers
area = length*width.
For discussion see the "Hello world" example in the tutorial.
- Please add this example to the demos directory and place a link here.
Fibonacci sequence
:- item(fib, int, 0).
fib(0) = 0.
fib(1) = 1.
fib(I) = fib(=I-1) + fib(=I-2).
- To get this to work in the alpha implementation, see this workaround. Also, you will have to replace the first two equality statements with assertions (see why) and replace the third = with max= (see why).
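For comparison, here is the same recurrence as a memoized Python function (a hypothetical sketch; Dyna's chart plays the role of the memo table):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def fib(i):
    # Base cases mirror the axioms fib(0) = 0 and fib(1) = 1.
    if i < 2:
        return i
    # The rule fib(I) = fib(=I-1) + fib(=I-2); lru_cache stores each
    # item value once, just as Dyna stores it in the chart.
    return fib(i - 1) + fib(i - 2)
```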
Factorial
:- item(fact, int, 1).
:- item(natural, bool, false).
fact(N) *= I if natural(I) & I <= N.
natural(1) |= true.
natural(=I+1) |= natural(I).
Alternatively, you could handle the iteration 1,2,...,N by writing a C++ function between:
:- item(fact, int, 1).
:- foreign_axiom(between).
fact(N) *= I if between(1,I,N).
- Neither version will work in the alpha implementation. See current limitations and workarounds.
Transitive closure of a graph
:- item(item, bool, false).
path(X,Y) |= edge(X,Y).
path(X,Z) |= edge(X,Y) & path(Y,Z).
Add this rule to find out which nodes are on cycles:
on_cycle(X) |= path(X,X).
- This should work fine in the alpha implementation.
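The same two rules can be run to a fixed point in a few lines of Python (a naive forward-chaining sketch; Dyna's agenda-based execution is semi-naive and avoids re-deriving established facts):

```python
def transitive_closure(edges):
    # path(X,Y) |= edge(X,Y).
    path = set(edges)
    changed = True
    while changed:
        changed = False
        # path(X,Z) |= edge(X,Y) & path(Y,Z).
        for (x, y) in edges:
            for (y2, z) in list(path):
                if y == y2 and (x, z) not in path:
                    path.add((x, z))
                    changed = True
    return path

def on_cycle(path):
    # on_cycle(X) |= path(X,X).
    return {x for (x, y) in path if x == y}
```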
Shortest path (Dijkstra's algorithm)
In a directed graph, finds the path of minimum total cost from start to end.
This generalized version lets you specify multiple alternative start or end vertices, and an added cost for choosing a particular start or end vertex.
:- item(item, double, inf).
pathto(U) min= start_vertex(U).
pathto(V) min= pathto(U) + edge(U,V).
goal min= pathto(V) + end_vertex(V).
- See demos/dijkstra in the distribution for a running implementation.
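A Python sketch of the same generalized computation, using a priority queue the way Dyna's agenda would (the dict-based encodings of start_vertex, end_vertex, and edge are my own, not part of the Dyna interface):

```python
import heapq

def shortest_path(start_vertex, end_vertex, edge):
    # start_vertex and end_vertex map vertices to their added costs;
    # edge maps (u, v) pairs to edge costs.
    adj = {}
    for (u, v), w in edge.items():
        adj.setdefault(u, []).append((v, w))
    # pathto(U) min= start_vertex(U).
    heap = [(c, u) for u, c in start_vertex.items()]
    heapq.heapify(heap)
    pathto = {}
    while heap:
        d, u = heapq.heappop(heap)
        if u in pathto:          # already settled with a smaller value
            continue
        pathto[u] = d
        # pathto(V) min= pathto(U) + edge(U,V).
        for v, w in adj.get(u, []):
            if v not in pathto:
                heapq.heappush(heap, (d + w, v))
    # goal min= pathto(V) + end_vertex(V).
    return min((pathto[v] + c for v, c in end_vertex.items() if v in pathto),
               default=float("inf"))
```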
Algebraic path
In a directed graph, finds the total product weight of all paths from a start to an end vertex.
:- item(item, double, 0).
pathto(U) += start_vertex(U).
goal += pathto(V) * end_vertex(V).
pathto(V) += pathto(U) * edge(U,V).
- This should work fine in the alpha implementation.
Hidden Markov Models
The Viterbi algorithm for decoding an HMM:
:- item(item,double,0).
alpha(State,0) max= initprob(State).
alpha(State,Time) max= alpha(Oldstate,=Time-1)*moveprob(Oldstate,State)
                       *emitprob(State,Symbol) if input(Symbol,Time).
goal max= alpha(State,Time)*stopprob(State) if input_length(Time).
If you replace max= with +=, you get the forward algorithm. Dyna can automatically derive the backward algorithm, allowing the DynaMITE toolkit to train the probabilities.
- To get this to work in the alpha implementation, see this workaround regarding the evaluated subterm =Time-1. Also, you may have to switch to log-probabilities and use + instead of * (see list of allowed semirings; this also avoids underflow).
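For reference, the max-product recurrence in Python (a small sketch with dict-encoded parameters of my own devising; swapping max for sum gives the forward algorithm, just as swapping max= for += does above):

```python
def viterbi(initprob, moveprob, emitprob, stopprob, symbols):
    # alpha(State, 0) max= initprob(State).
    alpha = dict(initprob)
    for sym in symbols:
        # alpha(State,Time) max= alpha(Old,Time-1) * moveprob(Old,State)
        #                        * emitprob(State,Symbol).
        alpha = {s: max(alpha[o]
                        * moveprob.get((o, s), 0.0)
                        * emitprob.get((s, sym), 0.0)
                        for o in alpha)
                 for s in initprob}
    # goal max= alpha(State,Time) * stopprob(State).
    return max(alpha[s] * stopprob.get(s, 0.0) for s in alpha)
```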
Edit distance
:- item(item, int, inf).
:- item([end1,end2], bool, false).
align(0,0) = 0.
align(J1,J2) min= align(I1,I2) + word1(W1,I1,J1) + word2(W2,I2,J2) + subst(W1,W2).
align(I1,J2) min= align(I1,I2) + word2(W2,I2,J2) + insert(W2).
align(J1,I2) min= align(I1,I2) + word1(W1,I1,J1) + delete(W1).
goal = align(N1,N2) if end1(N1) & end2(N2).
The DynaMITE toolkit allows you to train the weights of the subst, insert, and delete axioms, e.g. in a stochastic edit distance framework.
- To get this to work in the alpha implementation, see this workaround.
- See demos/edit_distance in the distribution for a running implementation.
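A direct Python rendering of the same dynamic program over string positions (a sketch that specializes the subst weight to 0 for identical symbols and unit costs otherwise, i.e., ordinary Levenshtein distance):

```python
def edit_distance(word1, word2, subst=1, insert=1, delete=1):
    n1, n2 = len(word1), len(word2)
    align = {(0, 0): 0}          # align(0,0) = 0.
    for i in range(n1 + 1):
        for j in range(n2 + 1):
            if (i, j) == (0, 0):
                continue
            best = float("inf")
            if i > 0 and j > 0:  # substitution (free if the symbols match)
                cost = 0 if word1[i - 1] == word2[j - 1] else subst
                best = min(best, align[i - 1, j - 1] + cost)
            if j > 0:            # insertion of word2[j-1]
                best = min(best, align[i, j - 1] + insert)
            if i > 0:            # deletion of word1[i-1]
                best = min(best, align[i - 1, j] + delete)
            align[i, j] = best
    return align[n1, n2]         # goal = align(N1,N2).
```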
Storing and smoothing model parameters
Dyna data structures are very helpful for managing data and model parameters in a C++ program. You can use Dyna terms as structured variable names in C++:
count(trigram("better","than","baubles"))
count(trigram(comparative("good"),"than",plural("bauble")))
count(["baubles","than","better"]) % n-gram as reversed list
It's easy as pie to tell Dyna to create efficient C++ classes for this:
:- structure(trigram(string,string,string)).
:- structure(count(event e)).
:- union(event,[unigram,bigram,trigram]).
:- item(count, int, 0). % every count term has an associated int value, 0 by default
If x is such a name, your C++ program can write val[x] for its associated value, where val is a Dyna chart object:
chart val;
trigram t("better","than","baubles");
++val[count(t)]; // or just ++val[count(trigram("better","than","baubles"))]
- The above should work fine in the alpha implementation. However, the smoothing formulas below won't work yet.
Furthermore, the chart can derive related quantities for you, such as smoothed probabilities and count-counts. Just put inference rules into your Dyna program:
mle_prob(trigram(X,Y,Z)) = count(trigram(X,Y,Z))/count(bigram(X,Y)). % compute a probability
mle_prob([Word|Context]) = count([Word|Context])/count(Context). % general version for n-grams
smoothed_prob(trigram(X,Y,Z)) = lambda*mle_prob(trigram(X,Y,Z))
                                + (1-lambda)*mle_prob(bigram(Y,Z)). % compute a smoothed probability
% You can now use DynaMITE to train lambda to optimize some objective function.
count_count(N) += 1 whenever N is count(Something). % for Good-Turing or Katz smoothing
count_count(=count(Something)) += 1. % shorthand equivalent for previous line
Now you can write things like
cout << val[smoothed_prob(t)]; // magically defined by one of the above rules
The count_counts and smoothed probabilities will be automatically maintained as you change the counts. It's Dyna's job to figure out whether it's more efficient to compute these quantities on demand or to store them, and when it's most efficient to update stored values.
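The counting rules translate directly into Python dictionaries (a hypothetical sketch; the tuple encoding of n-grams stands in for Dyna's trigram and bigram terms):

```python
from collections import Counter

count = Counter()
for trigram in [("better", "than", "baubles"),
                ("better", "than", "nothing"),
                ("better", "than", "nothing")]:
    count[trigram] += 1
    count[trigram[:2]] += 1   # maintain the bigram prefix count too

def mle_prob(x, y, z):
    # mle_prob(trigram(X,Y,Z)) = count(trigram(X,Y,Z)) / count(bigram(X,Y)).
    return count[(x, y, z)] / count[(x, y)]

# count_count(N) += 1 whenever N is count(Something).
count_count = Counter(count.values())
```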
Parsing and synchronous parsing
Dyna is especially appropriate for dynamic programming algorithms, such as parsing.
Context-free parsing (CKY algorithm)
This version computes the inside probabilities of constituents that have nonzero probability. The same program can also parse speech lattices: just provide input where I, J, and N are lattice states instead of string positions.
:- item(item, double, 0).
constit(X,I,J) += rewrite(X,W) * word(W,I,J).
constit(X,I,K) += rewrite(X,Y,Z) * constit(Y,I,J) * constit(Z,J,K).
goal += constit(s,0,N) * end(N).
Here is a version that uses logarithmic probabilities to avoid underflow:
:- item(item, double, -inf). % log(0) is negative infinity
constit(X,I,J) log+= rewrite(X,W) + word(W,I,J).
constit(X,I,K) log+= rewrite(X,Y,Z) + constit(Y,I,J) + constit(Z,J,K).
goal log+= constit(s,0,N) + end(N).
In either case, Dyna will derive the inside-outside algorithm for you so that DynaMITE can train the weights of the rewrite axioms (i.e., the grammar axioms) by EM, conjugate gradient, deterministic annealing, etc. If the rewrite weights are not probabilities, it can train a maximum entropy model.
Note: Dyna actually runs something more sophisticated than CKY (probability-guided agenda-based parsing).
- See demos/cky for a running version. (Actually, a version that computes the Viterbi approximation. We should add the inside-outside version to the demos directory.)
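The inside recurrence can be checked against a straightforward Python chart filled bottom-up by span width (a sketch; the dict encoding of the grammar is my own):

```python
from collections import defaultdict

def inside(words, unary, binary, start="s"):
    # unary maps (X, W) to the weight of rewrite(X, W);
    # binary maps (X, Y, Z) to the weight of rewrite(X, Y, Z).
    n = len(words)
    constit = defaultdict(float)
    # constit(X,I,J) += rewrite(X,W) * word(W,I,J).
    for i, w in enumerate(words):
        for (x, ww), p in unary.items():
            if ww == w:
                constit[x, i, i + 1] += p
    # constit(X,I,K) += rewrite(X,Y,Z) * constit(Y,I,J) * constit(Z,J,K).
    for width in range(2, n + 1):
        for i in range(n - width + 1):
            k = i + width
            for j in range(i + 1, k):
                for (x, y, z), p in binary.items():
                    constit[x, i, k] += p * constit[y, i, j] * constit[z, j, k]
    # goal += constit(s,0,N) * end(N).
    return constit[start, 0, n]
```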
Context-free parsing (Earley's algorithm)
Again, the Dyna version automatically handles probabilities, training, agenda-based parsing, and lattices. Note the use of lists.
:- item(item, double, 0).
:- item(need, bool, false).
need(s,0) = true. % init
need(Nonterm,J) |= (constit(_/[Nonterm|_],_,J) != 0). % left context
constit(Nonterm/Needed,I,I) += rewrite(Nonterm,Needed) if need(Nonterm,I). % "predict"
constit(Nonterm/Needed,I,K) += constit(Nonterm/[W|Needed],I,J) * word(W,J,K). % "scan"
constit(Nonterm/Needed,I,K) += constit(Nonterm/[X|Needed],I,J) * constit(X/[],J,K). % "complete"
goal += constit(s/[],0,N) * end(N). % finish
- To get this to work in the alpha implementation, see this workaround. A running version, which also uses a left-corner filter, is in demos/earley_viterbi.
Synchronous parsing for machine translation (Wu)
Many synchronous grammar formalisms can be implemented. Here is the simplest, stochastic inversion transduction grammar.
:- item(item, double, 0).
constit(X,I1:I2,J1:J2) max= rewrite(X,W1:W2) * word(W1,I1,J1) * word(W2,I2,J2).
constit(X,I1:I2,K1:K2) max= rewrite_norm(X,Y,Z) * constit(Y,I1:I2,J1:J2) * constit(Z,J1:J2,K1:K2).
constit(X,I1:I2,K1:K2) max= rewrite_swap(X,Y,Z) * constit(Y,I1:J2,J1:K2) * constit(Z,J1:I2,K1:J2).
goal max= constit(s,0:0,N1:N2) * end(N1:N2).
This version (using max=) finds the best synchronous parse. It can be used to align two known strings, or to translate by aligning a known string to a lattice specifying all possible translations (e.g., Σ*). As usual, one can train the weights.
- A running version is in demos/itg. (It sums log-probabilities rather than multiplying probabilities.)
Collins parser (Model 2)
Collins's lexicalized parser can be re-implemented with a relatively small Dyna program. The core model is under 40 lines of code. There is additional code for training and smoothing. We are working on replicating the state-of-the-art results with our Dyna implementation.
Bilexical CFG parsing (Eisner & Satta)
:- item(item, double, 0).
:- item([lhalf,rhalf,end], bool, false).
constit(A,H,H,H) += rewrite(A:D,D) if word(D,H).
left(A/C,I,J,H) += constit(B,I,H2,J) * rewrite(A:D,B:D2,C:D)
                   if lhalf(C,=J+1,H) & word(D,H) & word(D2,H2).
right(A\C,H,I,J) += constit(B,I,H2,J) * rewrite(A:D,C:D,B:D2)
                    if rhalf(C,H,=J-1) & word(D,H) & word(D2,H2).
lhalf(A,I,H) |= constit(A,I,H,J)!=0.
rhalf(A,H,J) |= constit(A,I,H,J)!=0.
constit(A,I,H,K) += left(A/C,I,J,H) * constit(C,=J+1,H,K).
constit(A,I,H,K) += constit(C,I,H,=J-1) * right(A\C,H,J,K).
goal += constit(s,1,_,N) if end(N).
- To get this to work in the alpha implementation, see this workaround and this workaround.
Combinatory categorial grammar (CCG) parsing (Vijay-Shanker & Weir)
- fill this in
Tree-adjoining grammar (TAG) parsing
- fill this in
Neural networks
:- computed(sigmoid_function). % defined in C++
out(Node) = sigmoid_function(in(Node)). % for a neural network
in(Node) += weight(Node,Previous)*out(Previous).
error += (out(Node)-target(Node))**2 if ?target(Node).
This works not just for feed-forward networks but also for recurrent ones. Dyna will derive the back-propagation algorithm for computing the gradient so that DynaMITE can train the network to minimize error.
- This won't work in the alpha implementation, because we don't allow arbitrary computed expressions yet. See current limitations and workarounds.
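The two rules correspond to an ordinary forward pass; here is a Python sketch for a feed-forward network (the node names and the explicit topological ordering argument are my own conventions):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(weights, inputs, order):
    # inputs maps input nodes to fixed activations; order lists the
    # remaining nodes topologically, so each in(Node) is complete
    # before out(Node) is computed.
    out = dict(inputs)
    for node in order:
        # in(Node) += weight(Node,Previous) * out(Previous).
        net = sum(w * out[prev]
                  for (n, prev), w in weights.items() if n == node)
        # out(Node) = sigmoid_function(in(Node)).
        out[node] = sigmoid(net)
    return out

def error(out, target):
    # error += (out(Node) - target(Node))**2 if ?target(Node).
    return sum((out[n] - t) ** 2 for n, t in target.items())
```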
Graphical models
An (undirected) graphical model is described as a product of potential functions.
numerator(D,F) += a(A)*b(B,A)*c(C,A)*d(D,A,B)*e(E,B,C)*f(F,E).
denominator(D) += numerator(D,F).
conditional_prob(D,F) = numerator(D,F)/denominator(D).
For example, conditional_prob(3,8) is p(F=8 | D=3).
Dyna solves this by variable elimination (binarization of inference rules).
(It does not yet know how to pick a good elimination order, or triangulation, but
searching for one can probably be written as a short Dyna program itself.)
- This should work fine in the alpha version, except for the quotient in the last line.
One can also look for the most probable set of values such that D=3:
best(D) max= a(A)*b(B,A)*c(C,A)*d(D,A,B)*e(E,B,C)*f(F,E).
Dechter's mini-buckets approximation simply decouples some of the variables (such as B) to get an upper bound:
better_than_best(D) max= a(A)*b(B1,A)*c(C,A)*d(D,A,B1)*e(E,B2,C)*f(F,E).
- This should work fine in the alpha version, except that you may have to switch to log-probabilities and use + instead of * (see list of allowed semirings; this also avoids underflow).
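For a concrete check of the conditional-probability rules above, here is a brute-force Python sketch that enumerates the free variables A, B, C, E (variable elimination, as Dyna performs it, shares the intermediate sums instead of enumerating):

```python
from itertools import product

def conditional_prob(d, f, a, b, c, dp, e, fp, domain):
    # numerator(D,F) += a(A)*b(B,A)*c(C,A)*d(D,A,B)*e(E,B,C)*f(F,E).
    def numerator(dv, fv):
        return sum(a[A] * b[B, A] * c[C, A] * dp[dv, A, B]
                   * e[E, B, C] * fp[fv, E]
                   for A, B, C, E in product(domain, repeat=4))
    # denominator(D) += numerator(D,F).
    denominator = sum(numerator(d, fv) for fv in domain)
    # conditional_prob(D,F) = numerator(D,F)/denominator(D).
    return numerator(d, f) / denominator
```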
Game-tree analysis
This game-tree analyzer uses minimax to determine the best strategy for player1 to maximize his or her total payoff over the course of the game. Each axiom represents a legal move by one player, and its value specifies the immediate payoff to player1 (perhaps negative) of that move. Note that in a traditional game, only stop axioms have nonzero payoff.
goal = best(Board) if start(Board).
best(Board) max= stop(player1, Board).
best(Board) max= move(player1, Board, NewBoard) + worst(NewBoard).
worst(Board) min= stop(player2, Board).
worst(Board) min= move(player2, Board, NewBoard) + best(NewBoard).
It is also possible to encode time-discounted payoffs. Simply replace worst(NewBoard) with 0.9*worst(NewBoard) in the code above.
The legal moves can be axioms provided as input to the program, but they could also be theorems, derived from the rules of the game by another part of the program.
Note that Dyna supports A* search (and may support other strategies in future).
- The above code won't work in the alpha implementation, because you can't yet mix max= and min= in the same program. See current limitations and workarounds.
- Should discuss handling games with randomness, like backgammon and Monopoly.
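The minimax recursion translates directly into Python (a sketch assuming an acyclic game graph; the dict encodings of the stop and move axioms are my own):

```python
def best(board, stop, move):
    # best(Board) max= stop(player1, Board).
    # best(Board) max= move(player1, Board, NewBoard) + worst(NewBoard).
    vals = [stop.get(("player1", board), float("-inf"))]
    vals += [payoff + worst(nb, stop, move)
             for (p, b, nb), payoff in move.items()
             if p == "player1" and b == board]
    return max(vals)

def worst(board, stop, move):
    # worst(Board) min= stop(player2, Board).
    # worst(Board) min= move(player2, Board, NewBoard) + best(NewBoard).
    vals = [stop.get(("player2", board), float("inf"))]
    vals += [payoff + best(nb, stop, move)
             for (p, b, nb), payoff in move.items()
             if p == "player2" and b == board]
    return min(vals)
```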
Partial order maintenance
Suppose you have been told a bunch of ordering facts about some real-valued variables, including both strong and weak inequalities:
told(x<=q). told(p<=x). told(y<=p). told(p<=y). told(y!=q).
and you would like to know, for example, whether it can be deduced that p < q.
:- item(item, boolean, false).

% We will only try to prove "basic" facts about A and B: <= and !=.
% The two rules below will derive non-basic facts from these.
know(A<B) |= know(A<=B) & know(A!=B).
know(A==B) |= know(A<=B) & know(B<=A). % same rule proves know(B==A)

% Prove enough "basic" facts to capture everything we've been told.
know(A<=B) |= told(A<=B).
know(A!=B) |= told(A!=B).
know(A<=B) |= told(A==B).
know(B<=A) |= told(A==B).
know(A<=B) |= told(A<B).
know(A!=B) |= told(A<B).

% Identity is a freebie. We get it even if we were told nothing.
% Not that it helps you prove anything else.
know(A<=A) |= true.

% Prove more basic facts from all facts we've established so far.
know(A<=C) |= know(A<=B) & know(B<=C).
know(A!=C) |= know(A<=B) & know(B<C).
know(A!=C) |= know(A<B) & know(B<=C).
know(A!=C) |= know(A==B) & know(B!=C).
know(B!=A) |= know(A!=B).

% detect any contradiction
contradiction |= know(A!=A).
(This program could be made more efficient -- it derives the same facts in more ways than necessary.)
This kind of program can be extended to temporal logic, which considers intervals rather than points.
- This program should run in the alpha version if you can live without the identity rule (which contains an illegal free variable on the left-hand side), except perhaps for the syntactic sugar provided by infix operators. For example, I'm not sure that A<=B will be parsed; it may be necessary to write it as A=<B or leq(A,B). It is also possible to declare new infix operators such as A<=B, either with an :- operator declaration in the program or by changing the default declarations in src/dyna_parser/basic_parser.cpp.
Automata
Dyna is well-suited for simulating formal language automata, and for performing automaton constructions.
- Please add the FST composition program and its more efficient transformed version.
Finite-state recognizer
Composition of finite-state transducers
:- item(item, bool, false).
start(A o B, Q x R) |= start(A, Q) & start(B, R).
arc(A o B, Q1 x R1, Q2 x R2, In, Out) |= arc(A, Q1, Q2, In, Match) & arc(B, R1, R2, Match, Out).
stop(A o B, Q x R) |= stop(A, Q) & stop(B, R).
This is a simple unweighted version of finite-state transducer composition. It has several shortcomings: it composes ad infinitum, it produces arcs to and from non-accessible and non-coaccessible states, and it doesn't handle ε-transitions. The first can be handled as follows:
:- item(item, bool, false).
start(A o B, Q x R) |= start(A, Q) & start(B, R) if want(A o B).
arc(A o B, Q1 x R1, Q2 x R2, In, Out) |= arc(A, Q1, Q2, In, Match) & arc(B, R1, R2, Match, Out) if want(A o B).
stop(A o B, Q x R) |= stop(A, Q) & stop(B, R) if want(A o B).
We can trim a transducer to only accessible and coaccessible states as follows:
accessible(A, Q) |= start(A, Q) if want(trim(A)).
accessible(A, Q) |= arc(A, P, Q, _In, _Out) if accessible(A, P).
coaccessible(A, Q) |= stop(A, Q) if want(trim(A)).
coaccessible(A, Q) |= arc(A, Q, R, _In, _Out) if coaccessible(A, R).
start(trim(A), Q) |= start(A, Q) if coaccessible(A, Q).
arc(trim(A), Q, R, In, Out) |= arc(A, Q, R, In, Out) if accessible(A, Q) & coaccessible(A, R).
stop(trim(A), Q) |= stop(A, Q) if accessible(A, Q).
The following program handles ε-transitions. See (Mohri, Pereira, and Riley 1996) for an explanation of the filter transducer.
:- item(item, bool, false).
start(A o B, Q x R) |= start(A, Q) & start(B, R) if want(A o B).
arc(A o B, QS x RS, QT x RT, IN, OUT) |= filter_arc(A o B, QS x RS, QT x RT, IN, OUT, _FILTER)
    if coaccessible(A o B, QT x RT).
stop(A o B, Q x R) |= stop(A, Q) & stop(B, R) if accessible(A o B, Q x R, _FILTER).
accessible(A o B, Q x R, 0) |= start(A o B, Q x R).
accessible(A o B, Q x R, FILTER) |= filter_arc(A o B, _SOURCE, Q x R, _IN, _OUT, FILTER).
coaccessible(A o B, Q x R) |= stop(A o B, Q x R).
coaccessible(A o B, QS x RS) |= filter_arc(A o B, QS x RS, QT x RT, _IN, _OUT, _FILTER)
    if coaccessible(A o B, QT x RT).
filter_arc(A o B, QS x RS, QT x RT, IN, OUT, 0) |= arc(A, QS, QT, IN, sigma(MATCH)) & arc(B, RS, RT, sigma(MATCH), OUT)
    if accessible(A o B, QS x RS, _FILTER).
filter_arc(A o B, QS x RS, QT x RT, IN, OUT, 0) |= arc(A, QS, QT, IN, epsilon(EPS)) & arc(B, RS, RT, epsilon(EPS), OUT)
    if accessible(A o B, QS x RS, 0).
filter_arc(A o B, Q x RS, Q x RT, epsilon(EPS), OUT, 1) |= arc(B, RS, RT, epsilon(EPS), OUT)
    if accessible(A o B, Q x RS, 0).
filter_arc(A o B, Q x RS, Q x RT, epsilon(EPS), OUT, 1) |= arc(B, RS, RT, epsilon(EPS), OUT)
    if accessible(A o B, Q x RS, 1).
filter_arc(A o B, QS x R, QT x R, IN, epsilon(EPS), 2) |= arc(A, QS, QT, IN, epsilon(EPS))
    if accessible(A o B, QS x R, 0).
filter_arc(A o B, QS x R, QT x R, IN, epsilon(EPS), 2) |= arc(A, QS, QT, IN, epsilon(EPS))
    if accessible(A o B, QS x R, 2).
This handles most of the trimming. It doesn't trim non-coaccessible start states. It requires that alphabet symbols are asserted with sigma and ε symbols with epsilon.
All of these programs can be made to handle weighted finite-state transducers by replacing (item, bool, false) with (item, double, 0) and (|=, &) with (+=, *).
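A Python sketch of the unweighted composition with on-the-fly accessibility trimming (the dict-of-sets machine encoding is my own; ε-handling is omitted, as in the first Dyna version):

```python
def compose(A, B):
    # Machines are dicts with "start" and "stop" state sets and an "arc"
    # set of (source, target, input, output) tuples.
    start = {(q, r) for q in A["start"] for r in B["start"]}
    arcs, seen, stack = set(), set(start), list(start)
    while stack:
        q, r = stack.pop()
        for (q1, q2, inp, match1) in A["arc"]:
            if q1 != q:
                continue
            for (r1, r2, match2, out) in B["arc"]:
                # arc(A o B, Q1 x R1, Q2 x R2, In, Out) |=
                #     arc(A, Q1, Q2, In, Match) & arc(B, R1, R2, Match, Out).
                if r1 == r and match1 == match2:
                    arcs.add(((q, r), (q2, r2), inp, out))
                    if (q2, r2) not in seen:
                        seen.add((q2, r2))
                        stack.append((q2, r2))
    stop = {(q, r) for (q, r) in seen if q in A["stop"] and r in B["stop"]}
    return {"start": start, "arc": arcs, "stop": stop}
```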
Turing machine
EM clustering of sparse vectors
:- item([coord,centroid_coord],double,0).
:- item([logprob,logprior],double,0).
:- item(variance,double,1).
:- item(total,double,neg_inf).
logprob(Vect,Cluster) += -(coord(Vect,I)-centroid_coord(Cluster,I))**2 / variance.
total(Vect) log+= logprob(Vect,Cluster)*logprior(Cluster).
goal += total(Vect).
This defines goal to be the total log-probability of all the vectors. One can train the centroid coordinates to increase goal.
- This won't work in the alpha implementation, because we don't allow arbitrary computed expressions yet. See current limitations and workarounds.
Static analysis
Many algorithms for static analysis of programs can be elegantly expressed in Dyna. (See McAllester 2000.)
Type declaration inference
The Dyna compiler itself uses a Dyna program for tasks such as inferring type declarations for the user's program.
% If X can be of type T everywhere on the right-hand side of an inference rule,
% then we must allow it to be of type T on the left-hand side too.
required(Functor:I, Type) |= declaration_includes(Functor:I, Type).
required(Functor:I, Type) |= occur(Var:Functor:I, Lhs) & rule(Lhs, Rhs) & requiredvar(Var:Rhs, Type).
requiredvar(Var:Expr, Type) &= required(Functor:I, Type) if occur(Var:Functor:I, Expr).
% If X can't be of type T on the left-hand side of an inference rule,
% then we can't allow it to be of type T anywhere on the right-hand side either.
forbidden(Functor:I, Type) |= declaration_excludes(Functor:I,Type).
forbidden(Functor:I, Type) |= occur(Var:Functor:I, Rhs) & rule(Lhs, Rhs) & forbiddenvar(Var:Lhs, Type).
forbiddenvar(Var:Expr, Type) |= occur(Var:Functor:I, Expr) & forbidden(Functor:I, Type).
% check for contradictions in the user's type declarations.
contradiction |= required(Functor:I,Type) & forbidden(Functor:I,Type).
% permit everything that's required. Where nothing was explicitly required,
% permit everything that's not explicitly forbidden.
permit(Functor:I,Type) |= required(Functor:I,Type).
something_required(Functor:I) |= required(Functor:I,Type).
permit(Functor:I,Type) |= istype(Type) & !forbidden(Functor:I,Type) & !something_required(Functor:I).
The first part of the program computes strong reachability in a hypergraph, while the second part computes weak reachability in its reversal.
- Actually the correct algorithm is trickier ...
Retraction closure
A similar computation determines which item types must be preserved, and which deleted, when a specified set of axioms is retracted. (The only structural difference from the previous program is that this time, the second hypergraph is not reversed. The encoding of the problem is also different: instead of deciding whether vertex Functor:I should permit a given Type, we will decide whether vertex Type should be preserved or not. Var is dropped.)
% If every antecedent of an inference rule is preserved,
% then we must also preserve the consequent.
required(Type) |= preserved_axiom(Type).
required(Type) |= occur(Type, Lhs) & rule(Lhs, Rhs) & requiredrhs(Rhs).
requiredrhs(Rhs) &= required(Type) if occur(Type, Rhs).
% If any antecedent of an inference rule is deleted,
% then we must also delete the consequent.
forbidden(Type) |= retracted_axiom(Type).
forbidden(Type) |= occur(Type, Lhs) & rule(Lhs, Rhs) & forbiddenrhs(Rhs).
forbiddenrhs(Rhs) |= occur(Type, Rhs) & forbidden(Type).
% check whether there are types that cannot be either
% preserved or retracted in their entirety.
contradiction |= required(Type) & forbidden(Type).
- To get this program to work in the alpha implementation, you have to use this workaround. A running implementation is in src/dyna_parser/retraction_closure together with src/parser_types/parser_types.dyna.
Dynamic graph algorithms
- Minimum spanning tree algorithms
- Union-find
- Weighted bipartite matching
Database operations
Select, join, etc. on tuples.
Constraint programming
This program defines arc consistency for constraint programming. We rule out a value of Var if, for some other variable Var2, it is consistent with no surviving value of Var2.
:- item(in_domain, bool, false). % axiom - user may assert some true values
:- item(consistent, bool, true). % axiom - user may assert some false values
:- item(variable, bool, false). % what are the variables of the system?
:- item(possible, bool, false). % what values are arc-consistent for each variable?
:- item(support, bool, false). % why are those values arc-consistent?
variable(Var) |= in_domain(Var:Val).
possible(Var:Val) &= in_domain(Var:Val).
possible(Var:Val) &= support(Var:Val, Var2) whenever variable(Var2).
support(Var:Val, Var2) |= consistent(Var:Val, Var2:Val2) & possible(Var2:Val2).
Dyna finds the optimal algorithm (known in the literature as AC-4). (See more discussion of this program.)
- The program will not run in the alpha version because it is not homogeneous. It might be made to run by flipping consistent to inconsistent and using this workaround.
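An executable Python sketch of the fixed point these rules compute (a naive AC-1-style loop; Dyna's incremental maintenance of the support items is what yields AC-4):

```python
def arc_consistent(domains, consistent):
    # possible(Var:Val) &= support(Var:Val, Var2) whenever variable(Var2):
    # keep Val only while every other variable still has a consistent value.
    possible = {var: set(vals) for var, vals in domains.items()}
    changed = True
    while changed:
        changed = False
        for var, vals in possible.items():
            for val in list(vals):
                for var2, vals2 in possible.items():
                    if var2 == var:
                        continue
                    # support(Var:Val, Var2) |= consistent(Var:Val, Var2:Val2)
                    #                           & possible(Var2:Val2).
                    if not any(consistent(var, val, var2, val2)
                               for val2 in vals2):
                        vals.discard(val)
                        changed = True
                        break
    return possible
```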
Transformation-based learning
Transformation model
Secretary problem
And other examples of Bellman-style dynamic programming, like Eng & Eisner for predictive text entry.
Word games
- Crossword puzzles
- Cryptograms (thanks to Kevin Knight)
- Anagrams
- Finding words on a Boggle board