I Built a Hebbian RNN next steps

Last updated Feb 27, 2024

#hebbian
2024-02-09 08:06

Four things to work on next:

# new rules

Does Oja’s Rule work? I might even be able to generalize further, though probably not in this next step. I may want to combine the general equation with a meta-learning approach, like in meta-learning-through-hebbian-plasticity. See List of Hebbian Weight Update Rules.

That’s not a simple approach, though. It would be simpler for me to just swap out rules, like:
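
To make the swap concrete, here is a minimal sketch of plain Hebbian vs. Oja’s rule as interchangeable update functions. The function names and tensor shapes are my own assumptions, not the actual code:

```python
import torch

def hebb_update(w, pre, post, lr=0.01):
    # plain Hebbian rule: dw = lr * post * pre^T
    return w + lr * torch.outer(post, pre)

def oja_update(w, pre, post, lr=0.01):
    # Oja's rule: dw_ij = lr * post_i * (pre_j - post_i * w_ij),
    # which keeps the weight norms from blowing up
    return w + lr * (torch.outer(post, pre) - (post ** 2).unsqueeze(1) * w)

# swapping rules is then just a matter of passing a different callable
update_rule = oja_update
```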

I also had an interesting idea: replace my next-character-prediction reward signal with one that is layer-wise. It’s based on HPCA, but time-shifted: HPCA
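
For reference, the standard (non-time-shifted) HPCA update, which the layer-wise idea would presumably modify, looks roughly like this; shapes and names here are my assumptions:

```python
import torch

def hpca_update(w, x, y, lr=0.01):
    # w: (n_out, n_in) weights, x: (n_in,) input, y: (n_out,) layer activations
    # HPCA / Sanger-style update: dw_i = lr * y_i * (x - sum_{j <= i} y_j * w_j)
    recon = torch.cumsum(y.unsqueeze(1) * w, dim=0)  # running reconstruction per output unit
    return w + lr * y.unsqueeze(1) * (x.unsqueeze(0) - recon)
```

The time-shifted part, i.e. which step’s activations stand in for x, is the piece still to be worked out.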

# better dataset

I’m currently using Project Gutenberg, with 100-character strings chosen at random, one-hot encoded and fed in one character at a time. I’m not sure whether it’s a bottleneck in the learning process, for a couple of reasons:
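
Roughly, that pipeline looks like the sketch below; this is a reconstruction from the description above, not the actual code, and the helper name is hypothetical:

```python
import torch
import torch.nn.functional as F

def random_one_hot_window(text, vocab, seq_len=100):
    # pick a random 100-character window and one-hot encode it,
    # to be fed into the network one character per timestep
    char_to_idx = {c: i for i, c in enumerate(vocab)}
    start = torch.randint(0, len(text) - seq_len, (1,)).item()
    idxs = torch.tensor([char_to_idx[c] for c in text[start:start + seq_len]])
    return F.one_hot(idxs, num_classes=len(vocab)).float()  # (seq_len, vocab_size)
```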

PyTorch has built-in text datasets: torchtext.datasets — Torchtext 0.17.0 documentation. Notably, the unsupervised ones look interesting:
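
For example, pulling one of the unsupervised (raw language-modeling) datasets should look roughly like this, assuming the torchtext 0.17 interface from the docs linked above (it also needs torchdata installed):

```python
from torchtext.datasets import WikiText2

# each item is a raw line of text, which could feed the same
# one-hot, character-at-a-time pipeline as the Gutenberg strings
train_iter = WikiText2(split="train")
for i, line in enumerate(train_iter):
    print(repr(line))
    if i >= 2:
        break
```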

# university compute

I have to find my BYU ID, contact Dr. Fulda, contact the Office of Research Computing, and use a VPN; it’s a whole hassle. I’ll want my code to be well organized, portable, and optimized, so I’m biased toward doing all the other things first. However, it’s possible that the better understanding of the learning dynamics I want is just a two-day supercomputer run away. I could literally be wasting time right now, because my existing code might already just work at scale.

# wipe and restart

Other papers already have Hebbian learning running; I may just need to slap on my reward signal and RNN loop and get it going. It could also be a good way to see whether a new structure proves superior.
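
As a rough sketch of what “reward signal plus RNN loop” means here, assuming a three-factor (reward-gated) Hebbian update on the recurrent weights; the names are hypothetical and this isn’t any particular paper’s code:

```python
import torch

def rnn_hebbian_step(h, x, W_in, W_rec, reward, lr=1e-3):
    # one recurrent step; W_rec gets a reward-modulated Hebbian update instead of backprop
    h_new = torch.tanh(W_in @ x + W_rec @ h)
    # three-factor update: pre-activity, post-activity, and a scalar reward signal
    W_rec = W_rec + lr * reward * torch.outer(h_new, h)
    return h_new, W_rec

# toy usage
n_hidden, n_in = 64, 80
h = torch.zeros(n_hidden)
W_in = torch.randn(n_hidden, n_in) * 0.1
W_rec = torch.randn(n_hidden, n_hidden) * 0.1
x = torch.zeros(n_in); x[5] = 1.0   # one-hot character input
reward = 1.0                        # e.g. +1 when the next character was predicted correctly
h, W_rec = rnn_hebbian_step(h, x, W_in, W_rec, reward)
```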