This should do it:
import math
def sigmoid(x):
    return 1 / (1 + math.exp(-x))
And now you can test it by calling:
>>> sigmoid(0.458)
0.61253961344091512
Update: Note that the above was mainly intended as a straight one-to-one translation of the given expression into Python code. It is not tested or known to be a numerically sound implementation. If you know you need a very robust implementation, there are other answers where people have actually given this problem some thought.
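For instance, one common way to make the plain-Python version better behaved for large-magnitude inputs is to branch on the sign of x so that math.exp is only ever called with a non-positive argument. A minimal sketch (the name stable_sigmoid is just illustrative, and this is untested in the same spirit as the above):

import math

def stable_sigmoid(x):
    # exp is only ever evaluated at a non-positive argument, so it cannot overflow.
    if x >= 0:
        return 1 / (1 + math.exp(-x))
    z = math.exp(x)
    return z / (1 + z)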
It is also available in scipy: http://docs.scipy.org/doc/scipy/reference/generated/scipy.stats.logistic.html
In [1]: from scipy.stats import logistic
In [2]: logistic.cdf(0.458)
Out[2]: 0.61253961344091512
which is, however, a costly wrapper (because it allows you to scale and translate the logistic function) around another scipy function:
In [3]: from scipy.special import expit
In [4]: expit(0.458)
Out[4]: 0.61253961344091512
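If you want to convince yourself of that relationship, here is a quick check (the loc and scale values are arbitrary; logistic.cdf(x, loc, scale) should match expit((x - loc) / scale)):

import numpy as np
from scipy.stats import logistic
from scipy.special import expit

# logistic.cdf just translates by loc and scales by scale before applying
# the same logistic curve that expit computes directly.
x, loc, scale = 0.458, 2.0, 3.0
assert np.isclose(logistic.cdf(x, loc=loc, scale=scale), expit((x - loc) / scale))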
If you are concerned about performance, continue reading; otherwise, just use expit.
Some benchmarking:
In [5]: def sigmoid(x):
   ...:     return 1 / (1 + math.exp(-x))
   ...:
In [6]: %timeit -r 1 sigmoid(0.458)
1000000 loops, best of 1: 371 ns per loop
In [7]: %timeit -r 1 logistic.cdf(0.458)
10000 loops, best of 1: 72.2 µs per loop
In [8]: %timeit -r 1 expit(0.458)
100000 loops, best of 1: 2.98 µs per loop
As expected, logistic.cdf is (much) slower than expit. expit is still slower than the pure-Python sigmoid function when called with a single value, because it is a universal function written in C ( http://docs.scipy.org/doc/numpy/reference/ufuncs.html ) and thus has a call overhead. For a single value, this overhead outweighs the speedup that expit gets from its compiled nature, but it becomes negligible for big arrays:
In [9]: import numpy as np
In [10]: x = np.random.random(1000000)
In [11]: def sigmoid_array(x):
   ...:     return 1 / (1 + np.exp(-x))
   ...:
(You’ll notice the tiny change from math.exp to np.exp: the former does not support arrays, but is much faster if you only have a single value to compute.)
In [12]: %timeit -r 1 -n 100 sigmoid_array(x)
100 loops, best of 1: 34.3 ms per loop
In [13]: %timeit -r 1 -n 100 expit(x)
100 loops, best of 1: 31 ms per loop
But when you really need performance, a common practice is to keep a precomputed table of the sigmoid function in RAM, and trade some precision and memory for some speed (for example: http://radimrehurek.com/2013/09/word2vec-in-python-part-two-optimizing/ )
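A rough sketch of that lookup-table idea (the table size and clipping range here are arbitrary choices, loosely modeled on the word2vec trick linked above):

import numpy as np

MAX_X = 6.0        # clip |x| to this range; an assumption, tune for your use case
TABLE_SIZE = 1000  # table resolution; also an assumption
_grid = np.linspace(-MAX_X, MAX_X, TABLE_SIZE)
SIGMOID_TABLE = 1 / (1 + np.exp(-_grid))

def sigmoid_lookup(x):
    # Clamp to the tabulated range, then return the nearest precomputed value.
    if x <= -MAX_X:
        return SIGMOID_TABLE[0]
    if x >= MAX_X:
        return SIGMOID_TABLE[-1]
    idx = int((x + MAX_X) / (2 * MAX_X) * (TABLE_SIZE - 1))
    return SIGMOID_TABLE[idx]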
Also, note that the expit implementation has been numerically stable since version 0.14.0: https://github.com/scipy/scipy/issues/3385
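As a quick illustration of what that stability buys you (the value -710 is just an arbitrary input past the ~709.78 overflow threshold of math.exp):

import math
from scipy.special import expit

x = -710.0
try:
    1 / (1 + math.exp(-x))       # math.exp(710) raises OverflowError
except OverflowError:
    print("naive sigmoid overflows")

print(expit(x))                  # expit handles it and returns 0.0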