Base functionality for entropy calculations common to all systems
Methods

- I : mutual information
- Ish : shuffled mutual information estimate
- Ishush : full shuffled mutual information estimate
- Ispike : information per spike
- calculate_entropies : calculate the entropies of the system
- pola_decomp : Pola information breakdown
I()
Convenience function to compute the mutual information.
The required entropies ['HX', 'HXY'] must already have been calculated.
Ish()
Convenience function to compute the shuffled mutual information estimate.
The required entropies ['HX', 'HiXY', 'HshXY', 'HXY'] must already have been calculated.
Ishush()
Convenience function to compute the full shuffled mutual information estimate.
The required entropies ['HX', 'SiHXi', 'HshX', 'HiXY', 'HshXY', 'HXY'] must already have been calculated.
Ispike()
Adelman (2003) style information per spike.
calculate_entropies()
Calculate the entropies of the system.
References
[R1] T. Schurmann and P. Grassberger, "Entropy estimation of symbol sequences," Chaos, vol. 6, no. 3, pp. 414-427, 1996.
[R2] R. Krichevsky and V. Trofimov, "The performance of universal encoding," IEEE Trans. Information Theory, vol. 27, no. 2, pp. 199-207, Mar. 1981.
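As a sketch of what a plugin (maximum-likelihood) estimate computes, the following self-contained NumPy fragment builds HX and the conditional entropy HXY from discretised data and combines them as I = HX - HXY, mirroring the convenience method above. Variable and function names here are illustrative, not the library's:

```python
import numpy as np

def plugin_entropy(counts):
    """Plugin (maximum-likelihood) entropy in bits from a histogram."""
    p = counts / counts.sum()
    p = p[p > 0]                      # 0 * log(0) terms contribute nothing
    return -(p * np.log2(p)).sum()

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)                    # stimulus: 2 values
noise = rng.integers(0, 3, 1000)
x = np.where(rng.random(1000) < 0.7, y, noise)  # response: 3 values, depends on y

# HX: entropy of the response distribution
HX = plugin_entropy(np.bincount(x, minlength=3))

# HXY: conditional entropy H(X|Y), averaged over stimulus values
HXY = sum((y == s).mean() * plugin_entropy(np.bincount(x[y == s], minlength=3))
          for s in (0, 1))

I = HX - HXY    # mutual information, as in the I() convenience method
```

With pyentropy itself, the equivalent pattern would be along the lines of `s.calculate_entropies(method='plugin', calc=['HX', 'HXY'])` followed by `s.I()`; check the exact signature against your installed version.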
pola_decomp()
Convenience function for the Pola information breakdown.
DiscreteSystem
Bases: pyentropy.systems.BaseSystem
Class to hold probabilities and calculate entropies of a discrete stochastic system.
Methods

- I : mutual information
- Ish : shuffled mutual information estimate
- Ishush : full shuffled mutual information estimate
- Ispike : information per spike
- calculate_entropies : calculate the entropies of the system
- pola_decomp : Pola information breakdown
__init__(...)
Check and assign the inputs.
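To illustrate the kind of discretised input the constructor checks, here is a minimal sketch. The signature DiscreteSystem(X, X_dims, Y, Y_dims), with X_dims = (n, m) giving the number of variables and the alphabet size, is an assumption to verify against your installed version:

```python
import numpy as np

t = 500                                      # number of trials
rng = np.random.default_rng(1)
Y = rng.integers(0, 2, t)                    # 1 stimulus variable taking 2 values
# 1 response variable taking 4 values, correlated with the stimulus
X = np.where(rng.random(t) < 0.6, Y, rng.integers(0, 4, t))

# Assumed usage (verify the signature locally):
# from pyentropy import DiscreteSystem
# s = DiscreteSystem(X, (1, 4), Y, (1, 2))
# s.calculate_entropies(method='plugin', calc=['HX', 'HXY'])
# print(s.I())
```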
SortedDiscreteSystem
Bases: pyentropy.systems.DiscreteSystem
Class to hold probabilities and calculate entropies of a discrete stochastic system when the inputs are already sorted by stimulus.
Methods

- I : mutual information
- Ish : shuffled mutual information estimate
- Ishush : full shuffled mutual information estimate
- Ispike : information per spike
- calculate_entropies : calculate the entropies of the system
- pola_decomp : Pola information breakdown
__init__(...)
Check and assign the inputs.
base2dec
Convert base-b words to decimal values.
Note: this is the same as decimalise, except that the input x is ordered differently (here x[t,n], i.e. columns are trials).
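A self-contained sketch of the documented behaviour (the function name with a `_sketch` suffix marks it as an illustration, not the library code), taking each row of x as one base-b word with the most significant digit first:

```python
import numpy as np

def base2dec_sketch(x, b):
    # Each row of x is a length-n base-b word; weight the digit columns
    # by decreasing powers of b and sum to get the decimal value.
    x = np.atleast_2d(x)
    n = x.shape[1]
    weights = b ** np.arange(n - 1, -1, -1)
    return x @ weights

# The word (1, 0, 2) in base 3 is 1*9 + 0*3 + 2 = 11
print(base2dec_sketch([[1, 0, 2], [0, 0, 1]], 3))   # -> [11  1]
```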
dec2base
Convert a decimal value to a row of values representing it in a given base.
Returns: y : (t, digits) array
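The inverse direction can be sketched the same way (again the `_sketch` name marks this as illustrative): expand each decimal value into a fixed number of base-b digits, most significant first, giving the documented (t, digits) shape:

```python
import numpy as np

def dec2base_sketch(x, b, digits):
    # Integer-divide by decreasing powers of b, then take the remainder
    # mod b, to recover each digit of the base-b representation.
    x = np.atleast_1d(x)
    powers = b ** np.arange(digits - 1, -1, -1)
    return (x[:, None] // powers) % b

print(dec2base_sketch([11, 1], 3, 3))   # -> [[1 0 2]
                                        #     [0 0 1]]
```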
decimalise
Convert base-b words to decimal values.
Calculate the NSB entropy of a probability distribution using the external nsb-entropy program.
Requires nsb-entropy to be installed on the system path.
Sample the probability distribution of an integer sequence.
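The operation amounts to a normalised histogram over the integer alphabet; a minimal sketch (the name and the explicit alphabet-size parameter are illustrative assumptions):

```python
import numpy as np

def prob_sketch(x, m):
    # Sample probability over the alphabet {0, ..., m-1}:
    # count occurrences of each symbol and divide by the sequence length.
    counts = np.bincount(x, minlength=m)
    return counts / len(x)

p = prob_sketch(np.array([0, 1, 1, 2, 1, 0]), 4)
# p = [2/6, 3/6, 1/6, 0]; the entries sum to 1
```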
Quantise a 1D input vector into m levels (unsigned).
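As a sketch of the simplest case, equal-width binning of a continuous vector into m integer levels (the documented function may also support other binning strategies, such as equal-occupancy bins, not reproduced here):

```python
import numpy as np

def quantise_sketch(x, m):
    # Split the range [min(x), max(x)] into m equal-width bins and map
    # each sample to its bin index, 0 .. m-1.
    edges = np.linspace(x.min(), x.max(), m + 1)
    return np.digitize(x, edges[1:-1])   # interior edges -> levels 0..m-1

x = np.array([0.1, 0.4, 0.35, 0.8, 1.0])
print(quantise_sketch(x, 2))   # -> [0 0 0 1 1]
```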
Re-bin an already discretised sequence (e.g. of integer counts).
The input must already consist of non-negative integers.
Module for computing finite-alphabet maximum entropy solutions using a coordinate transform method.
For details of the method see:
Ince, R. A. A., Petersen, R. S., Swan, D. C., Panzeri, S., 2009 “Python for Information Theoretic Analysis of Neural Data”, Frontiers in Neuroinformatics 3:4 doi:10.3389/neuro.11.004.2009 http://www.frontiersin.org/neuroinformatics/paper/10.3389/neuro.11/004.2009/
If you use this code in a published work, please cite the above paper.
The generated transformation matrices for a given set of parameters are stored to disk. The default cache location is a .pyentropy (_pyentropy on Windows) directory in the user's home directory. To override this and use a custom location (for example, to share the folder between users), put a configuration file .pyentropy.cfg (pyentropy.cfg on Windows) in the home directory with the following format:
[maxent]
cache_dir = /path/to/cache
pyentropy.maxent.get_config_file() will show where it is looking for the config file.
The probability vector for a finite-alphabet space of n variables, each with m possible values, is a length m**n-1 vector ordered such that the value of the index equals the decimal value of the input state it represents, when interpreted as a base-m, length-n word. E.g. for n=3, m=3:
P[0] = P(0,0,0)
P[1] = P(0,0,1)
P[2] = P(0,0,2)
P[3] = P(0,1,0)
P[4] = P(0,1,1) etc.
This allows efficient vectorised conversion between probability index and response word using base2dec, dec2base. The output is in the same format.
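The index rule above can be demonstrated directly; this helper (an illustrative name, not a library function) reproduces the listing for n=3, m=3:

```python
import numpy as np

def word_to_index(word, m):
    # Index of P for a response word: the base-m decimal value of the
    # word, most significant digit first, as in the ordering above.
    word = np.asarray(word)
    n = len(word)
    return int(word @ (m ** np.arange(n - 1, -1, -1)))

print(word_to_index((0, 0, 0), 3))   # -> 0, i.e. P[0] = P(0,0,0)
print(word_to_index((0, 1, 1), 3))   # -> 4, i.e. P[4] = P(0,1,1)
print(word_to_index((2, 2, 2), 3))   # -> 26, the last of the 27 = m**n states
```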
A class for computing maximum-entropy solutions.
When the class is initialised, the coordinate transform matrices are loaded from disk if available, or generated otherwise. See the module docstring for the location of the cache directory.
An instance then exposes a solve method, which returns the maximum entropy distribution preserving the marginal constraints of the input probability vector up to a given order k.
This class computes the full transformation matrix and so can compute solutions for any order.
Methods

- eta_from_p : eta-vector (marginals) from full probability vector
- p_from_theta : full p-vector from theta
- solve : find the maxent distribution for a given order k
- theta_from_p : theta-vector from full probability vector
__init__(...)
Set up the transformation matrix for the given parameter set.
If an existing matrix file is found, the (sparse) transformation matrix A is loaded from it; otherwise the matrix is generated.
solve(...)
Find the maxent distribution for a given order k.
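To give a concrete sense of what such a solution preserves, here is a self-contained sketch for the special case k=1, where the maximum entropy distribution is simply the product of the single-variable marginals. This is only an illustration of the k=1 case; the library's coordinate-transform machinery handles arbitrary k, and the function names here are not the library's:

```python
import numpy as np

def maxent_k1(p, n, m):
    # Max-entropy distribution preserving first-order (k=1) marginals:
    # the product of the single-variable marginals of p. p is indexed
    # in the module's base-m word order described above.
    joint = p.reshape((m,) * n)
    marginals = [joint.sum(axis=tuple(j for j in range(n) if j != i))
                 for i in range(n)]
    out = marginals[0]
    for mg in marginals[1:]:
        out = np.multiply.outer(out, mg)
    return out.reshape(-1)

def H(v):
    # Entropy in bits of a probability vector.
    v = v[v > 0]
    return -(v * np.log2(v)).sum()

# A correlated joint distribution over n=2 ternary (m=3) variables:
p = np.array([0.3, 0.05, 0.0, 0.05, 0.3, 0.0, 0.0, 0.0, 0.3])
q = maxent_k1(p, 2, 3)
# q has the same single-variable marginals as p, but higher entropy
```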
Return the theta-vector from the full probability vector.
Return the eta-vector (marginals) from the full probability vector.
Return the full fdim-length p-vector from the (fdim-1)-length theta vector.
Get the location and name of the config file for specifying the data cache dir. You can call this to find out where to put your config.
Get the data cache directory used to load and save precomputed matrices.
Calculate entropy using the C NSB implementation from the Spike Train Analysis Toolkit.
Calculate entropy using the BUB implementation from the Spike Train Analysis Toolkit.