memory activation and decay #33
Further consideration suggests that spreading activation should not be performed when adding a chunk to a graph. With respect to the group-of-items example, if spreading activation were done when adding the chunks, then the oldest items in the group would get higher activation levels than the youngest items, which seems wrong! It therefore seems better to only perform spreading activation when updating or accessing an existing chunk.

It should be safer and easier to use getters and setters for chunk properties when it comes to maintaining the index from chunk IDs to the chunks that have that ID as a property value. A counter-argument is that explicit functions might make it easier to port the chunks library to programming languages that don't support getters and setters. The index is only updated if the chunk's graph property is defined.

Some possible complications:
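As a rough sketch of the getter/setter approach (the names `graph`, `refs`, and `defineIndexedProperty` are illustrative assumptions, not the chunks library's actual API):

```javascript
// Illustrative sketch: a property accessor keeps the graph's index from
// chunk IDs to referencing chunks up to date as the property changes.
const graph = {
  refs: new Map(), // chunk ID -> Set of IDs of chunks referencing it
  index(value, id) {
    if (!this.refs.has(value)) this.refs.set(value, new Set());
    this.refs.get(value).add(id);
  },
  unindex(value, id) {
    const set = this.refs.get(value);
    if (set) set.delete(id);
  }
};

function defineIndexedProperty(chunk, name) {
  let value; // backing store for the property
  Object.defineProperty(chunk.props, name, {
    enumerable: true,
    get() { return value; },
    set(newValue) {
      // the index is only updated if the chunk's graph is defined
      if (chunk.graph) {
        if (value !== undefined) chunk.graph.unindex(value, chunk.id);
        chunk.graph.index(newValue, chunk.id);
      }
      value = newValue;
    }
  });
}
```

With this in place, a plain assignment such as `chunk.props.object = "dog1"` updates the reverse index as a side effect, which is exactly the behaviour that would need an explicit function call in a language without accessors.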
It is likely that additional and more complicated indexes will be needed to support finding chunks that match the chunk type and properties in chunk buffers and rule conditions. Further work is anticipated as part of scaling up chunk database performance for larger databases.
Memories that once were really active, but haven't been used for a long time, may still be of value in the right context. The above approach relies on spreading activation to boost old memories, but is that always sufficient? One potential approach would be to also keep track of how many times a given memory has been used. How should this be combined with the decaying model of activation with respect to memory recall? And how could we design experiments to test this with human subjects?
Discussed today during Cogai CG call. Starting point would be to add a
The current version of the JavaScript library uses
Consider animals that need to adapt their behaviour to the annual seasons: knowledge needed for each season lies dormant for much of the year. The decay rate for chunk strength is inversely proportional to chunk usage. This balances the needs of short and long term memories, and reflects underlying changes in synapses due to neurotransmitters and neural plasticity.
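One way to make "decay rate inversely proportional to usage" concrete is the following sketch; the function name, the base time constant, and the exact formula are assumptions for illustration, not part of the specification:

```javascript
// Illustrative sketch: exponential decay of chunk activation where the
// decay rate shrinks as the usage count grows, so heavily used chunks
// are forgotten more slowly than rarely used ones.
const BASE_TIME_CONSTANT = 10; // arbitrary unit of time (an assumption)

function decayedActivation(activation, usageCount, elapsed) {
  // rate = 1 / (BASE_TIME_CONSTANT * usage), i.e. inversely
  // proportional to usage; clamp usage to at least 1
  const rate = 1 / (BASE_TIME_CONSTANT * Math.max(1, usageCount));
  return activation * Math.exp(-rate * elapsed);
}
```

Under this scheme a seasonal behaviour used many times in past years retains enough strength to survive months of dormancy, while a chunk used once decays quickly.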
Unlike conventional database systems, cognitive agents just want to recall what is most important based upon experience. This is similar to web search engines which seek to provide the results that are likely to be most relevant given the words in the search query.
Underwood (1957) showed that memory loss is largely attributable to interference with other memories. Memories can thus be recalled after an interval of many years provided that the interference is small. This reflects experience in selecting memories that have been more valuable.
For ACT-R, the decay of activation is only one component of the activation equation. There is also a context component to activation which works to increase the activation of items based on the current context. Thus, even chunks which have decayed significantly over time can have activations above the threshold if they are strongly related to the current context.
Proposed approach for the chunks specification
Spreading Activation
Here is an example:
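As an illustrative sketch of bidirectional spreading activation (the names, constants, and data layout are assumptions, not the specification's): accessing a chunk boosts its own activation and passes a fraction of the boost to linked chunks in both directions.

```javascript
// Illustrative sketch: accessing a chunk boosts its activation and
// spreads a fraction of the boost forwards (to chunks its properties
// name) and backwards (to chunks whose properties name it).
const BOOST = 1.0;
const SPREAD = 0.5; // fraction of the boost passed to neighbours

function boost(chunks, id, amount) {
  const chunk = chunks.get(id);
  if (chunk) chunk.activation += amount;
}

function access(chunks, id) {
  boost(chunks, id, BOOST);
  const chunk = chunks.get(id);
  if (!chunk) return;
  // forward links: property values naming other chunks
  for (const value of Object.values(chunk.props))
    boost(chunks, value, BOOST * SPREAD);
  // backward links: chunks whose properties reference this chunk
  for (const [otherId, other] of chunks) {
    if (otherId !== id && Object.values(other.props).includes(id))
      boost(chunks, otherId, BOOST * SPREAD);
  }
}
```

Scanning every chunk for backward links is only for illustration; the reverse index described below avoids that linear scan.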
One implementation strategy is to have one index mapping from chunk IDs to chunks, and another index from chunk IDs to the set of chunk IDs for chunks that have the given ID as a property value. A further index maps chunk types to the set of IDs for chunks with that type. This requires care to ensure that the indexes are kept up to date in respect to adding and removing chunks from a graph, as well as when the chunk type or chunk properties are updated.
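The three indexes and their maintenance on add/remove can be sketched as follows (class and method names are assumptions; here any property value is treated as a potential chunk ID):

```javascript
// Illustrative sketch: three indexes kept consistent as chunks are
// added to and removed from a graph.
class ChunkGraph {
  constructor() {
    this.byId = new Map();   // chunk ID -> chunk
    this.byRef = new Map();  // chunk ID -> Set of IDs of referencing chunks
    this.byType = new Map(); // chunk type -> Set of chunk IDs
  }
  add(chunk) {
    this.byId.set(chunk.id, chunk);
    addTo(this.byType, chunk.type, chunk.id);
    for (const value of Object.values(chunk.props))
      addTo(this.byRef, value, chunk.id); // values treated as potential IDs
  }
  remove(id) {
    const chunk = this.byId.get(id);
    if (!chunk) return;
    this.byId.delete(id);
    removeFrom(this.byType, chunk.type, id);
    for (const value of Object.values(chunk.props))
      removeFrom(this.byRef, value, id);
  }
}

function addTo(map, key, id) {
  if (!map.has(key)) map.set(key, new Set());
  map.get(key).add(id);
}

function removeFrom(map, key, id) {
  const set = map.get(key);
  if (set) set.delete(id);
}
```

Updating a chunk's type or properties would need the same unindex-then-reindex discipline, which is why routing all mutation through a few functions (or accessors) matters.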
Here is an implementation in JavaScript:
Chunk recall first identifies the matching chunks, applies Gaussian noise to each match's activation level, and selects the matching chunk with the highest resulting score. The selected chunk is then activated as above. Selection fails if the score is below a given threshold.
The Gaussian distribution is centred on zero and tails off for large negative and positive numbers, so graph.gaussian on average returns values close to zero and only rarely returns large negative or positive values.
To apply Gaussian noise to an activation level, multiply the level by e raised to the power of the noise value computed with graph.gaussian. The standard deviation should be a system-wide constant.
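The recall procedure above can be sketched as follows; `gaussian` here is a stand-in for graph.gaussian (implemented with the Box-Muller transform), and the standard deviation and threshold values are arbitrary assumptions:

```javascript
// Illustrative sketch of stochastic recall: each matching chunk's
// activation is multiplied by e^noise with noise ~ N(0, sigma), and the
// best-scoring chunk is returned unless its score is below threshold.
const SIGMA = 0.5;     // system-wide standard deviation (assumed value)
const THRESHOLD = 0.1; // recall threshold (assumed value)

function gaussian(sigma = SIGMA) {
  // Box-Muller transform: two uniform samples -> one normal sample
  const u = 1 - Math.random(); // avoid log(0)
  const v = Math.random();
  return sigma * Math.sqrt(-2 * Math.log(u)) * Math.cos(2 * Math.PI * v);
}

function recall(matches, threshold = THRESHOLD) {
  let best = null, bestScore = -Infinity;
  for (const chunk of matches) {
    const score = chunk.activation * Math.exp(gaussian());
    if (score > bestScore) { best = chunk; bestScore = score; }
  }
  return bestScore >= threshold ? best : null; // null signals recall failure
}
```

Multiplying by e^noise keeps scores positive and makes the noise proportional to the activation level, so weak chunks occasionally win and strong chunks occasionally fall below the threshold.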
For the memory test task, the successfully recalled items in the test are treated as an iteration (see @do next). Rules then have access to the number of items recalled as well as to the sequence of items. Items may fail to be recalled if their activation level is low, or if the stochastic noise depresses the score below the threshold.

Summary
Human memory is functionally modelled in terms of a graph of chunks where each chunk is associated with an activation level and a timestamp. Activation decays exponentially with time (like a leaky capacitor), but is boosted by recall or update, and via spreading activation in both directions through links between chunks. Recall is stochastic with noise being applied to the chunk activation level before comparison with a cut-off threshold.