Use this URL to cite or link to this record in EThOS: https://ethos.bl.uk/OrderDetails.do?uin=uk.bl.ethos.759876
Title: Learning spatio-temporal spike train encodings with ReSuMe, DelReSuMe, and Reward-modulated Spike-timing Dependent Plasticity in Spiking Neural Networks
Author: Ozturk, Ibrahim
ISNI: 0000 0004 7431 8943
Awarding Body: University of York
Current Institution: University of York
Date of Award: 2017
Abstract:
Spiking Neural Networks (SNNs) are often referred to as the third generation of Artificial Neural Networks (ANNs). Inspired by biological observations and recent advances in neuroscience, the methods proposed here increase the computational power of SNNs. A central challenge today is to discover efficient plasticity rules for SNNs, and our research aims to explore and extend computational models of plasticity. We make several contributions using ReSuMe, DelReSuMe, and R-STDP, all built on the fundamental plasticity mechanism of STDP. Information in SNNs is encoded in patterns of firing activity, and for biological plausibility it is necessary to use multi-spike learning rather than single-spike learning; we therefore focus on encoding inputs and outputs with multiple spikes. ReSuMe is capable of generating desired patterns with multiple spikes: a neuron trained with ReSuMe can fire at desired times in response to spatio-temporal inputs. We propose an alternative architecture for ReSuMe that handles heterogeneous synapses and demonstrate that the proposed topology exactly mimics ReSuMe. A novel extension of ReSuMe, called DelReSuMe, achieves better accuracy in fewer iterations by adding multi-delay plasticity to weight learning, under both noiseless and noisy conditions; the proposed heterogeneous topology is also used for DelReSuMe. A further plasticity extension based on STDP, named R-STDP, takes reward into account to modulate synaptic strength. We use this dopamine-inspired STDP in SNNs to demonstrate improvements in mapping spatio-temporal patterns of spike trains with the multi-delay mechanism versus a single connection. From the viewpoint of Machine Learning, Reinforcement Learning is examined through a maze task in order to investigate the mechanisms of reward and eligibility trace, which are fundamental to R-STDP. To develop the approach we implement Temporal-Difference learning and novel knowledge-based RL techniques on the maze task, developing rule extractions that are combined with RL and wall-follower algorithms. We demonstrate improvements in the exploration efficiency of TD learning for maze navigation tasks.
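For readers unfamiliar with the reward-modulated rule the abstract refers to, a minimal sketch follows. It assumes a standard exponential STDP window and a single scalar eligibility trace per synapse; all names and parameter values (time constants, learning rate) are illustrative assumptions, not the thesis's actual formulation.

```python
import numpy as np

# Illustrative R-STDP parameters (hypothetical values, not from the thesis).
A_PLUS, A_MINUS = 1.0, 1.0        # STDP window amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # STDP time constants (ms)
TAU_E = 500.0                     # eligibility-trace decay constant (ms)
LR = 0.01                         # learning rate

def stdp_window(dt):
    """Classic exponential STDP curve: potentiation when the
    presynaptic spike precedes the postsynaptic spike (dt > 0)."""
    if dt > 0:
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    return -A_MINUS * np.exp(dt / TAU_MINUS)

def rstdp_step(w, e, dt_pairs, reward, dt_ms=1.0):
    """One simulation step of reward-modulated STDP.

    Spike-pair timings update the eligibility trace e, but the
    weight w only changes when a (dopamine-like) reward arrives.
    """
    e *= np.exp(-dt_ms / TAU_E)   # decay the trace
    for dt in dt_pairs:           # pre/post spike-time differences
        e += stdp_window(dt)      # STDP marks the synapse as eligible
    w += LR * reward * e          # reward converts eligibility into a weight change
    return w, e
```

In this sketch the eligibility trace stores the STDP-induced candidate weight change, and the reward signal gates whether it is actually applied; this coupling between timing-based plasticity and a reinforcement signal is what the abstract means by dopamine-inspired modulation.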
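Similarly, the tabular Temporal-Difference update behind the maze experiments can be sketched as below; the environment interface, reward structure, and epsilon-greedy exploration shown here are assumptions for illustration, not the thesis's exact setup.

```python
import random

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1  # illustrative TD constants

def q_learning_episode(q, env, actions):
    """One episode of tabular Q-learning (a TD method) on a maze.

    `env` is a hypothetical environment assumed to expose
    reset() -> state and step(state, action) -> (next_state, reward, done).
    """
    state = env.reset()
    done = False
    while not done:
        # epsilon-greedy exploration over the action set
        if random.random() < EPSILON:
            action = random.choice(actions)
        else:
            action = max(actions, key=lambda a: q.get((state, a), 0.0))
        next_state, reward, done = env.step(state, action)
        # TD error: reward plus discounted best future value
        # minus the current estimate for this state-action pair
        best_next = max(q.get((next_state, a), 0.0) for a in actions)
        td_error = reward + GAMMA * best_next - q.get((state, action), 0.0)
        q[(state, action)] = q.get((state, action), 0.0) + ALPHA * td_error
        state = next_state
    return q
```

The exploration term (EPSILON) is the quantity whose efficiency the abstract reports improving by combining TD learning with rule extraction and wall-follower heuristics.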
Supervisor: Halliday, David
Sponsor: Not available
Qualification Name: Thesis (Ph.D.)
Qualification Level: Doctoral
EThOS ID: uk.bl.ethos.759876
DOI: Not available