diff --git a/.cvskeywords b/.cvskeywords
index 90736f2241..318ccc5823 100644
--- a/.cvskeywords
+++ b/.cvskeywords
@@ -1,10 +1,10 @@
-./library/win32/win32-io-internals.factor:! $Id: win32-io-internals.factor,v 1.8 2005/07/23 09:30:17 eiz Exp $
+./library/win32/win32-io-internals.factor:! $Id: win32-io-internals.factor,v 1.11 2005/09/03 18:48:25 spestov Exp $
 ./library/win32/win32-io.factor:! $Id: win32-io.factor,v 1.4 2005/07/23 06:11:07 eiz Exp $
-./library/win32/win32-stream.factor:! $Id: win32-stream.factor,v 1.8 2005/07/23 06:11:07 eiz Exp $
-./library/win32/win32-errors.factor:! $Id: win32-errors.factor,v 1.7 2005/06/09 02:32:45 eiz Exp $
-./library/win32/win32-server.factor:! $Id: win32-server.factor,v 1.7 2005/07/23 06:11:07 eiz Exp $
-./library/win32/winsock.factor:! $Id: winsock.factor,v 1.5 2005/06/13 21:04:58 spestov Exp $
-./library/bootstrap/win32-io.factor:! $Id: win32-io.factor,v 1.8 2005/06/19 21:50:33 spestov Exp $
+./library/win32/win32-stream.factor:! $Id: win32-stream.factor,v 1.10 2005/09/03 18:48:25 spestov Exp $
+./library/win32/win32-errors.factor:! $Id: win32-errors.factor,v 1.8 2005/09/03 18:48:25 spestov Exp $
+./library/win32/win32-server.factor:! $Id: win32-server.factor,v 1.11 2005/09/03 18:48:25 spestov Exp $
+./library/win32/winsock.factor:! $Id: winsock.factor,v 1.6 2005/08/31 22:42:52 eiz Exp $
+./library/bootstrap/win32-io.factor:! $Id: win32-io.factor,v 1.9 2005/08/31 05:39:36 eiz Exp $
 ./factor/ExternalFactor.java: * $Id: ExternalFactor.java,v 1.27 2005/07/17 20:29:04 spestov Exp $
 ./factor/ConstructorArtifact.java: * $Id: ConstructorArtifact.java,v 1.1 2005/03/01 23:55:59 spestov Exp $
 ./factor/math/Complex.java: * $Id: Complex.java,v 1.1.1.1 2004/07/16 06:26:13 spestov Exp $
diff --git a/CHANGES.html b/CHANGES.html
index ff7f199285..5ab313438f 100644
--- a/CHANGES.html
+++ b/CHANGES.html
@@ -1,9 +1,99 @@
- Words can now be declared foldable, eg:

    : cube dup dup * * ; foldable

- Expressions such as 1 * are optimized out.
- New 2each ( seq seq quot -- | quot: elt elt -- ) combinator.
- New join ( seq glue -- seq ) word. Takes a sequence of sequences, and constructs a new sequence with the glue in between each sequence. For example:

    [ "usr" "bin" "grep" ] "/" join
    "usr/bin/grep"

- The count ( n -- [ 0 ... n-1 ] ) word is gone; just use >vector instead. Also, project has been made redundant by map.
- The seq-transpose ( seq -- seq ) word is now named flip.
- ... math vocabulary.
- The make-hash ( quot -- namespace ) combinator executes the quotation in a new namespace, which is then pushed on the stack.
- The <namespace> word is gone. It would create a hashtable with a default capacity. Now, just write {{ }} clone.
- make-list    ==> [ ] make
  make-vector  ==> { } make
  make-string  ==> "" make
  make-rstring ==> "" make reverse
  make-sbuf    ==> SBUF" " make
- The every? word has been replaced with monotonic? ( seq quot -- ? ). Its behavior is a superset of every? -- it accepts any transitive relation, and checks if the sequence is monotonic under this relation. For example, [ = ] monotonic? checks if all elements in a sequence are equal, and [ < ] monotonic? checks for a strictly increasing sequence of integers.
- The unparse ( object -- string ) word has been moved to the prettyprint vocabulary, and can now produce a parsable string for any class supported by the prettyprinter. unparse-short ( object -- string ) returns a string no longer than a single line.
- New profile ( word -- ) word. Causes the word's accumulated runtime to be stored in a global variable named by the word. This is done with the annotation facility, so the word's definition is modified; use reload ( word -- ) to get the old definition back from the source file.
- New sleep ( ms -- ) word pauses the current thread for a number of milliseconds.
- New with-datastack ( stack word -- stack ) combinator.
- New cond ( conditions -- ) combinator. It behaves like a set of nested iftes, and compiles if each branch has the same stack effect. See its documentation comment for details.
- ... (G: syntax) in the handbook.
- New contrib/concurrency (Chris Double).
- Improvements to contrib/algebra/. Now, vector operations are possible and the syntax doesn't use so many spaces. New way to write the quadratic formula:

    MATH: quadratic[a;b;c] =
        plusmin[(-b)/2*a;(sqrt(b^2)-4*a*c)/2*a] ;

  (Daniel Ehrenberg)
- New contrib/crypto/ (Doug Coleman).
The concurrency library here is based upon the style + of concurrency used in systems like Erlang and Termite. It is + currently at a very early stage and only supports concurrent + processes within a single Factor image. The interface is very likely to + change so it is quite experimental at this stage. The ability to + have distributed processes is planned.
+A concurrency oriented program is one in which multiple processes +run simultaneously in a single Factor image. The processes can +communicate with each other by asynchronous message sends. Although +processes can share data via Factor's mutable data structures it is +not recommended as the use of shared state concurrency is often a +cause of problems.
+The quickest way to get up and running with this library is to +change to the 'concurrency' directory and run Factor. Then execute the +following commands:
++"load.factor" run-file +USE: concurrency +USE: concurrency-examples ++
A process is basically a thread with a message queue. Other processes can place items on this queue by sending the process a message. A process can check its queue for messages, blocking if none are pending, and process them as they are queued.

Factor processes are very lightweight. Each process can take as little as 900 bytes of memory. This library has been tested running hundreds of thousands of simple processes.

The messages that are sent from process to process can be any Factor value. Factor tuples are ideal for this sort of thing, as you can send a tuple to a process and the predicate dispatch mechanism can be used to perform actions depending on the type of the tuple.
Processes are usually created using the 'spawn' word:

    IN: concurrency
    spawn ( quot -- process )

This word takes a quotation on the stack and starts a process that will execute that quotation asynchronously. When the quotation completes the process will die. 'spawn' leaves on the stack the process object that was started. This object can be used to send messages to the process using the 'send' word:

    IN: concurrency
    send ( message process -- )
'send' will return immediately after placing the message in the target process's message queue. A process can get a message from its queue using the 'receive' word:

    IN: concurrency
    receive ( -- message )

This will get the first message in the queue and leave it on the stack. If there are no messages in the queue the process will 'block' until a message is available. When a process is blocked it takes no CPU time at all.

    [ receive print ] spawn
    "Hello Process!" swap send

This example spawns a process that first blocks, waiting to receive a message. When a message is received, the 'receive' call returns, leaving it on the stack. It then prints the message and exits. 'spawn' left the process on the stack so it's available to send the 'Hello Process!' message to it. Immediately after the 'send' you should see 'Hello Process!' printed on the console.
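As a rough illustration of how lightweight these processes are (a sketch, not one of the library's own examples), the spawn, send and receive words above can manage a large number of processes at once. The snippet assumes the integer-to-sequence conversion described in the changelog above, so 1000 >vector yields the numbers 0 to 999:

    ! Sketch: spawn 1000 processes that each block on receive,
    ! then send each one a dummy message so it can finish and die.
    1000 >vector [ drop [ receive drop ] spawn ] map
    dup [ f swap send ] each
    drop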
It is also possible to selectively retrieve messages from the message queue. The 'receive-if' word takes a predicate quotation on the stack and returns the first message in the queue that satisfies the predicate. If no items satisfy the predicate then the process is blocked until a message is received that does.

    : odd? ( n -- ? )
        2 mod 1 = ;

    1 self send
    2 self send
    3 self send

    receive .
     => 1
    [ odd? ] receive-if .
     => 3
    receive .
     => 2
A process can get access to its own process object using the 'self' word so it can pass it to other processes. This allows the other processes to send messages back. A simple example of using this gets the current process's 'self' and spawns a process which sends a message to it. We then receive the message from the original process:

    self .s
     => << process ... >>
    [ "Hello!" swap send ] cons spawn drop receive .
     => "Hello!"
A common idiom is to create 'server' processes that act on the messages sent to them. These follow a basic pattern of blocking until a message is received, processing that message, then looping back to block for the next message.

The following example shows a very simple server that expects a cons cell as its message. The 'car' of the cons should be the sender's process object. If the 'cdr' is 'ping' then the server sends 'pong' back to the caller. If the 'cdr' is anything else then the server exits:

    : (pong-server0) ( -- )
        receive uncons "ping" = [
            "pong" swap send (pong-server0)
        ] [
            "Pong server shutting down" swap send
        ] ifte ;

    : pong-server0 ( -- process )
        [ (pong-server0) ] spawn ;

    pong-server0
    self "ping" cons over send receive .
     => "pong"
    self "ping" cons over send receive .
     => "pong"
    self "shutdown" cons over send receive .
     => "Pong server shutting down"
Handling the deconstruction of messages and dispatching based on the message can be a bit of a chore, especially in servers that take a number of different messages. One approach to factoring this code out, and reducing the amount of stack juggling required, is to use tuples as messages. This allows the generic dispatch mechanism to be used. The following example implements the pong server, but using tuples as messages:

    TUPLE: ping-message from ;
    TUPLE: shutdown-message from ;

    GENERIC: handle-message

    M: ping-message handle-message ( message -- bool )
        ping-message-from "pong" swap send t ;

    M: shutdown-message handle-message ( message -- bool )
        shutdown-message-from "Pong server shutdown commenced" swap send f ;

    : (pong-server1) ( -- )
        "pong-server1 waiting for message..." print
        receive handle-message [ (pong-server1) ] when ;

    : pong-server1 ( -- process )
        [
            (pong-server1)
            "pong-server1 exiting..." print
        ] spawn ;
Two tuples are created, for a 'ping' and a 'shutdown' message. Each has a 'from' slot which holds the process of the sender. The server loop, in '(pong-server1)', calls a generic word called 'handle-message'. Its methods have stack effect ( message -- bool ) and return a boolean: true means continue the server loop, false means exit and shut down the server.

Two methods are added to the generic word, one for 'ping' and the other for 'shutdown'. Here's a sample run:

    clear
    pong-server1
    => pong-server1 waiting for message...
    self <ping-message> over send receive .
    => "pong"
       pong-server1 waiting for message...
    self <ping-message> over send receive .
    => "pong"
       pong-server1 waiting for message...
    self <shutdown-message> over send receive .
    => "Pong server shutdown commenced"
       pong-server1 exiting...
The advantage of this approach is that it is easy to extend the server without shutting it down. Adding a new message is as simple as defining the tuple and adding a method to 'handle-message' specialised on that tuple. Here's an example of adding an 'echo' message, without shutting the server down:

    pong-server1
    => pong-server1 waiting for message...
    self <ping-message> over send receive .
    => "pong"

    TUPLE: echo-message from text ;

    M: echo-message handle-message ( message -- bool )
        dup echo-message-text swap echo-message-from send t ;

    self "Hello World" <echo-message> over send receive .
    => "Hello World"
The 'send' word sends a message asynchronously, and the sending process continues immediately. The 'pong server' examples shown previously all sent messages to the server and waited for a reply back from the server. This pattern of synchronous sending is made easier with the 'send-synchronous' word:

    IN: concurrency
    send-synchronous ( message process -- reply )

This word will send a message to the given process and immediately block until a reply is received for this particular message send. It leaves the reply on the stack. Note that it doesn't wait for just any reply; it waits for a reply specifically to this send.
To do this it wraps the requested message inside a 'tagged-message' tuple. This tuple is defined as:

    TUPLE: tagged-message data from tag ;

When 'send-synchronous' is called it will create a 'tagged-message', storing the current process in the 'from' slot. This is what the receiving server will use to send the reply to. It also generates a random 'tag' which is stored in the 'tag' slot. The receiving server will include this value in its reply. After the send, the current process will block waiting for a reply that has the exact same tag. In this way you can be sure that the reply you got was for the specific message sent.
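To sketch the receiving half of that handshake (an illustration only, not the library's actual implementation), a process with the expected tag on the stack could filter its mailbox with 'receive-if', building the predicate with 'cons' as in the earlier examples; the sketch assumes the generated 'tagged-message-tag' accessor:

    ! Sketch only: wait for a tagged-message carrying the tag on
    ! the stack, then extract the data it carries.
    ! ( tag -- reply-data )
    [ swap tagged-message-tag = ] cons receive-if
    tagged-message-data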
Here is the 'pong server' recoded to use 'send-synchronous' and the tagged-message type:

    GENERIC: handle-message2

    PREDICATE: tagged-message ping-message2 ( obj -- ? )
        tagged-message-data "ping" = ;
    PREDICATE: tagged-message shutdown-message2 ( obj -- ? )
        tagged-message-data "shutdown" = ;

    M: ping-message2 handle-message2 ( message -- bool )
        "pong" reply t ;

    M: shutdown-message2 handle-message2 ( message -- bool )
        "Pong server shutdown commenced" reply f ;

    : (pong-server2) ( -- )
        "pong-server2 waiting for message..." print
        receive handle-message2 [ (pong-server2) ] when ;

    : pong-server2 ( -- process )
        [
            (pong-server2)
            "pong-server2 exiting..." print
        ] spawn ;

    pong-server2
    => pong-server2 waiting for message...
    "ping" over send-synchronous .
    => "pong"
       pong-server2 waiting for message...
    "ping" over send-synchronous .
    => "pong"
       pong-server2 waiting for message...
    "shutdown" over send-synchronous .
    => "Pong server shutdown commenced"
       pong-server2 exiting...

The main difference in this example is that the 'handle-message2' methods are dispatched over predicate types. Two predicate types are set up, both based on the 'tagged-message' tuple mentioned earlier. The first is for 'ping-message2', which is a tagged message where the message data is the string "ping". The second is also a tagged message but the message data is the string "shutdown".
The implementation of the methods uses the 'reply' word. 'reply' takes a received tagged-message and a new message on the stack and replies to it. This means that it sends a reply back to the calling process using the same 'tag' as the original message. It is a convenience word so you don't have to manually unpack the tagged-message tuple to get at the originating process and tag. Its signature is:

    IN: concurrency
    reply ( tagged-message message -- )
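As a minimal usage sketch (not one of the original examples), a one-shot echo process ties 'send-synchronous' and 'reply' together: the spawned process receives a single tagged-message and replies with that message's own data.

    ! Sketch: echo back whatever data the tagged-message carries.
    [ receive dup tagged-message-data reply ] spawn
    "echo me" over send-synchronous .
     => "echo me"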
You'll probably have noticed that the general pattern of the pong server examples is the same. In a loop they receive a message, process it using a generic word, and either exit or go back to the beginning of the loop. This is abstracted in the 'spawn-server' word:

    IN: concurrency
    spawn-server ( quot -- process )

This takes a quotation that has stack effect ( message -- bool ). 'spawn-server' will spawn a server loop that waits for a message. When a message is received the quotation is called on it. If the quotation returns false then the server process exits, otherwise it loops from the beginning again. Using this word you can write the previous 'pong-server2' example as:

    GENERIC: handle-message2
    PREDICATE: tagged-message ping-message2 ( obj -- ? ) tagged-message-data "ping" = ;
    PREDICATE: tagged-message shutdown-message2 ( obj -- ? ) tagged-message-data "shutdown" = ;

    M: ping-message2 handle-message2 ( message -- bool )
        "pong" reply t ;

    M: shutdown-message2 handle-message2 ( message -- bool )
        "Pong server shutdown commenced" reply f ;

    : pong-server3 ( -- process )
        [ handle-message2 ] spawn-server ;

The main change is that you no longer need the helper (pong-server2) word.
A process can handle exceptions using the standard Factor exception handling mechanism. If an exception is uncaught the process will terminate. For example:

    [
        1 0 /
        "This will not print" print
    ] spawn
     =>
    Division by zero
    :s :r show stacks at time of error.
    :get ( var -- value ) inspects the error namestack.
Processes can be linked so that a parent process can receive the exception that caused the child process to terminate. In this way 'supervisor' processes can be created that are notified when child processes terminate and possibly restart them.

The easiest way to form this link is using the 'spawn-link' word. This will create a unidirectional link, such that if an uncaught exception causes the child to terminate, the parent process can catch it:
    [
        [
            1 0 /
            "This will not print" print
        ] spawn-link drop
        receive
    ] [
        [ "Exception caught." print ] when
    ] catch
     => "Exception caught."
Exceptions are only raised in the parent when the parent does a 'receive' or 'receive-if'. This is because the exception is sent from the child to the parent as a message.
To demonstrate how a 'supervisor' process could be created we'll use the following example 'rpc-server'. It processes 'add', 'product', 'shutdown' and 'crash' messages. 'crash' causes a deliberate divide by zero error to terminate the process:

    GENERIC: handle-rpc-message
    GENERIC: run-rpc-command

    TUPLE: rpc-command op args ;
    PREDICATE: rpc-command add-command ( msg -- bool )
        rpc-command-op "add" = ;
    PREDICATE: rpc-command product-command ( msg -- bool )
        rpc-command-op "product" = ;
    PREDICATE: rpc-command shutdown-command ( msg -- bool )
        rpc-command-op "shutdown" = ;
    PREDICATE: rpc-command crash-command ( msg -- bool )
        rpc-command-op "crash" = ;

    M: tagged-message handle-rpc-message ( message -- bool )
        dup tagged-message-data run-rpc-command -rot reply not ;

    M: add-command run-rpc-command ( command -- result shutdown? )
        rpc-command-args sum f ;

    M: product-command run-rpc-command ( command -- result shutdown? )
        rpc-command-args product f ;

    M: shutdown-command run-rpc-command ( command -- result shutdown? )
        drop t t ;

    M: crash-command run-rpc-command ( command -- result shutdown? )
        drop 1 0 / f ;

    : fragile-rpc-server ( -- process )
        [ handle-rpc-message ] spawn-server ;

    : test-add ( process -- )
        [
            "add" [ 1 2 3 ] <rpc-command> swap send-synchronous .
        ] cons spawn drop ;

    : test-crash ( process -- )
        [
            "crash" f <rpc-command> swap send-synchronous .
        ] cons spawn drop ;
An example of use:

    fragile-rpc-server
    => Waiting for message in server: G:13037
    dup test-add
    => 6
       Waiting for message in server: G:13037
    dup test-crash
    => Division by zero
       :s :r show stacks at time of error.
       :get ( var -- value ) inspects the error namestack.
    dup test-add
After the crash, all other messages are ignored by the server as it is no longer running. The following is a way to re-use this code by running a 'supervisor' process that links with the 'worker' rpc-server. When the worker crashes the supervisor process restarts it. All messages sent to the supervisor are immediately forwarded to the worker:

    : (robust-rpc-server) ( worker -- )
        [
            #! Forward all messages to worker
            receive over send
        ] [
            [
                "Worker died, Starting a new worker" print
                drop [ handle-rpc-message ] spawn-linked-server
            ] when
        ] catch
        (robust-rpc-server) ;

    : robust-rpc-server ( -- process )
        [
            [ handle-rpc-message ] spawn-linked-server
            (robust-rpc-server)
        ] spawn ;
This time when the 'robust-rpc-server' is run you'll notice that messages after the crash are still processed:

    robust-rpc-server
    => Waiting for message in server: G:13045
    dup test-add
    => 6
       Waiting for message in server: G:13045
    dup test-crash
    => Worker died, Starting a new worker
       Waiting for message in server: G:13050
    dup test-add
    => 6
       Waiting for message in server: G:13050
A future is a placeholder for the result of a computation that is being calculated in a process. When the process has completed the computation, the future can be queried to find out the result. If the computation has not completed when the future is queried then the querying process will block until the result is available.

A future is created using the 'future' word:

    IN: concurrency
    future ( quot -- future )

The quotation will be run in a spawned process, and a future object is immediately returned. This future object can be resolved using the word '?future':

    IN: concurrency
    ?future ( future -- result )
Futures are useful for starting calculations that take a long time to run but whose results aren't needed until later. When the process needs the value it can use '?future' to get the result, or block until the result is available. For example:

    [ 30 fib ] future
    ...do stuff...
    ?future
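For a self-contained variant that doesn't depend on the fib word from the examples vocabulary, any quotation will do. The sketch below assumes the integer-to-sequence conversion mentioned in the changelog above, so 1000 >vector yields the numbers 0 through 999:

    ! Sketch: compute a sum in a background process, do other work,
    ! then block (if necessary) for the result.
    [ 1000 >vector sum ] future
    ! ... other work here ...
    ?future .
     => 499500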
A promise is similar to a future, but it is not produced by calculating something in the background. It represents a promise to provide a value sometime later. A process can request the value of a promise and will block if the promise is not fulfilled. Later, another process can fulfill the promise, providing a value. All threads waiting on the promise will then resume with that value on the stack.

The words that operate on promises are:

    IN: concurrency
    <promise> ( -- promise )
    fulfill ( value promise -- )
    ?promise ( promise -- result )

A simple example of use is:
    <promise>
    [ ?promise "Promise fulfilled: " write print ] spawn drop
    [ ?promise "Promise fulfilled: " write print ] spawn drop
    [ ?promise "Promise fulfilled: " write print ] spawn drop
    "hello" swap fulfill
     => Promise fulfilled: hello
        Promise fulfilled: hello
        Promise fulfilled: hello
In this example a promise is created and three processes are spawned, waiting for that promise to be fulfilled. The main process then fulfills the promise with the value "hello" and all the blocked processes resume, printing the value.

In the Alice programming system it's possible to display futures and promises in the inspector, and the values will automatically change when the future is ready or the promise fulfilled. It's possible to do similar things with the Factor GUI, but there is nothing currently built in. A simple example of how this might work is included in the concurrency-examples vocabulary, with the 'test-promise-ui' word.
    : test-promise-ui ( -- )
        <promise> dup <promised-label> gadget.
        [ 12 fib unparse swap fulfill ] cons spawn drop ;
This creates a 'promised-label' gadget. This is a gadget, also implemented in the examples, that has an attached promise. The gadget will display the text 'Unfulfilled Promise' while the promise is unfulfilled. When it is fulfilled the gadget will immediately redisplay the value of the promise (which will need to be a printable value for this example).

The example above displays the gadget using 'gadget.' and then spawns a thread to compute the 12th Fibonacci number and fulfill the promise with it converted to a string. As soon as the fulfill occurs the gadget redisplays with the new value.

So running 'test-promise-ui' will display 'Unfulfilled Promise' and a short time later change to the newly computed value. You will need to have the Factor GUI listener running for this to work:

    USE: shells
    [ ui ] in-thread
Copyright (c) 2004, Chris Double. All Rights Reserved.
diff --git a/contrib/concurrency/load.factor b/contrib/concurrency/load.factor
new file mode 100644
index 0000000000..1ba87047a2
--- /dev/null
+++ b/contrib/concurrency/load.factor
@@ -0,0 +1,17 @@
+USE: kernel
+USE: httpd
+USE: threads
+USE: prettyprint
+USE: errors
+USE: io
+
+USE: parser
+
+: a "../dlists.factor" run-file
+    "concurrency.factor" run-file ;
+: b "concurrency-examples.factor" run-file ;
+: c "concurrency-tests.factor" run-file ;
+a
+b
+USE: concurrency
+USE: concurrency-examples
\ No newline at end of file
diff --git a/contrib/concurrency/style.css b/contrib/concurrency/style.css
new file mode 100644
index 0000000000..207afdceb7
--- /dev/null
+++ b/contrib/concurrency/style.css
@@ -0,0 +1,28 @@
+ body { background: white; color: black; }
+ p { margin-left: 10%; margin-right: 10%;
+   font: normal 100% Verdana, Arial, Helvetica; }
+ td { margin-left: 10%; margin-right: 10%;
+   font: normal 100% Verdana, Arial, Helvetica; }
+ table { margin-left: 10%; margin-right: 10%; }
+ ul { margin-left: 10%; margin-right: 10%;
+   font: normal 100% Verdana, Arial, Helvetica; }
+ ol { margin-left: 10%; margin-right: 10%;
+   font: normal 100% Verdana, Arial, Helvetica; }
+ h1 { text-align: center; margin-bottom: 0; margin-top: 1em; }
+ h2 { margin: 0 5% 0 7.5%; font-size: 120%; font-style: italic; }
+ h3 { border: 2px solid blue; border-width: 2px 0.5em 2px 0.5em;
+   padding: 0.2em 0.2em 0.2em 0.5em; background: #fafafa;
+   margin-left: 10%; margin-right: 10%; margin-top: 2em;
+   font-size: 100%; }
+ .note { border: 2px solid blue; border-width: 2px 2px 2px 2em;
+   padding: 0.5em 0.5em 0.5em 1em; background: #ffe; }
+ .code { border: 1px solid black; border-width: 1px;
+   padding: 0.5em; background: #ffe;
+   margin-left: 10%; margin-right: 10%; }
+ blockquote { margin-left: 25%; margin-right: 25%;
+   font-style: italic; }
+ .highlite { color: red; }
+ .footer { margin-top: 2.5em; border-top: 1px solid gray; color:
+   #AAA; font-size: 85%; padding-top: 0.33em; }
+ #copyright { text-align: center; color: #AAA;
+   font-size: 65%; }
\ No newline at end of file
diff --git a/contrib/cont-responder/cont-numbers-game.factor b/contrib/cont-responder/cont-numbers-game.factor
index ddde1ec541..62e2474d25 100644
--- a/contrib/cont-responder/cont-numbers-game.factor
+++ b/contrib/cont-responder/cont-numbers-game.factor
@@ -69,7 +69,7 @@ USE: namespaces