Is there a way to maintain microsecond accuracy when converting a Python datetime to a timestamp?

```
>>> import datetime
>>> d1 = datetime.datetime(2013,7,31,9,13,8,829)
>>> import time
>>> d1_ts = time.mktime(d1.timetuple())
>>> d1
datetime.datetime(2013, 7, 31, 9, 13, 8, 829)
>>> d1_ts
1375279988.0
>>> d1.fromtimestamp(d1_ts)
datetime.datetime(2013, 7, 31, 9, 13, 8)
```

I lose that `.829` on the conversion. This is rather important, because I have start and end timestamps that I need to step through at set intervals (with sub-second steps) to gather data from some sensors.

Eventually it will be used in a function similar to this:

```
from scipy import arange
sample_time = 0.02
for i in arange(d1_ts, d2_ts, sample_time):
    # do stuff
```

With a `sample_time` that small, the microseconds are important.
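For instance, here is a runnable sketch of that loop with hypothetical `d1_ts`/`d2_ts` values one second apart (`arange` here is NumPy's, which is what `scipy.arange` re-exports):

```python
import numpy as np

# Hypothetical start/end timestamps spanning one second of sensor data
d1_ts = 1375261988.000829
d2_ts = 1375261989.000829
sample_time = 0.02  # 20 ms between samples

samples = np.arange(d1_ts, d2_ts, sample_time)
# The first sample is exactly the start timestamp, fractional part intact,
# and consecutive samples are sample_time apart (to float precision).
print(samples[0])
```

A float timestamp around 1.4e9 still has roughly 0.2 microseconds of `double` resolution, so sub-second steps of this size are safe.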

Best answer

```
import datetime as DT
d1 = DT.datetime(2013, 7, 31, 9, 13, 8, 829)
epoch = DT.datetime(1970, 1, 1)
print(d1.timestamp())  # Python 3; a naive datetime is assumed to be local time
# 1375276388.000829
print((d1 - epoch).total_seconds())  # Python 2; treats d1 as if it were UTC
# 1375261988.000829
```

(The two values differ by the local UTC offset — here 4 hours — because `timestamp()` interprets a naive datetime in local time, while subtracting a naive epoch does not.)
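As a quick sanity check (not part of the original answer), the float returned by `timestamp()` keeps enough precision at this magnitude to round-trip back through `fromtimestamp()`:

```python
import datetime as DT

d1 = DT.datetime(2013, 7, 31, 9, 13, 8, 829)

# Unlike the mktime() round trip in the question, timestamp() keeps the
# fractional seconds, so fromtimestamp() recovers the microseconds
# (at worst off by 1 us due to float rounding).
rt = DT.datetime.fromtimestamp(d1.timestamp())
print(rt)
```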

Also note that if you are using NumPy 1.7 or newer, you could use `np.datetime64`:

```
In [23]: x = np.datetime64(d1)
In [24]: x.view('<i8')/1e6
Out[24]: 1375261988.000829
In [38]: x.astype('<i8').view('<M8[us]')
Out[38]: numpy.datetime64('2013-07-31T05:13:08.000829-0400')
In [40]: x.astype('<i8').view('<M8[us]') == x
Out[40]: True
```

Since `np.datetime64` provides an easy way to convert from dates to 8-byte `ints`, it can be very convenient to do the arithmetic as `ints` and then convert back to dates.
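A small sketch of that int round trip, using the question's 0.02 s interval as the step (the `'<M8[us]'` view assumes the `datetime64` was created at microsecond resolution, as it is when built from a `datetime` with microseconds):

```python
import datetime as DT
import numpy as np

d1 = DT.datetime(2013, 7, 31, 9, 13, 8, 829)
x = np.datetime64(d1)  # microsecond resolution, dtype '<M8[us]'

# Do the arithmetic as an 8-byte int (microseconds since the epoch),
# then view the result back as a datetime64.
step_us = 20_000  # the 0.02 s sample interval in whole microseconds
nxt = (x.astype('<i8') + step_us).view('<M8[us]')
print(nxt - x)
```

Because the underlying representation is an integer count of microseconds, this arithmetic is exact — no float rounding is involved at all.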