simdb/db.py (2 changes: 2 additions and 0 deletions)

@@ -28,6 +28,8 @@
 PY2 = sys.version_info.major == 2
 
 try:
+    from pymor.core.pickle import dumps, load

Owner commented:

I would prefer that simdb not carry any special cases for pyMOR. One could, however, think of some mechanism to customize pickling.

Contributor Author replied:

I have run across the need for pyMOR's more advanced pickling in simdb quite often recently, so any such mechanism in simdb would be appreciated.

+except ImportError:
     from cPickle import dumps, load
 except ImportError:
     from pickle import dumps, load
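One possible shape for the customization mechanism discussed above, shown purely as an illustration and not as part of this PR or of simdb's current API (the name set_pickler is made up), would be a module-level hook in simdb.db that callers override before the first dataset is written:

    # Hypothetical sketch only: set_pickler, _dumps and _load are made-up names
    # and do not exist in simdb today.
    from pickle import dumps, load

    _dumps, _load = dumps, load

    def set_pickler(dumps_fn, load_fn):
        """Let callers swap in their own pickle implementation (e.g. pymor.core.pickle)."""
        global _dumps, _load
        _dumps, _load = dumps_fn, load_fn

    # A pyMOR user would then opt in explicitly, keeping simdb itself pyMOR-agnostic:
    #     from pymor.core.pickle import dumps, load
    #     simdb.db.set_pickler(dumps, load)

simdb.db would then call _dumps and _load wherever it currently uses dumps and load, so the optional pyMOR dependency stays entirely in user code.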

simdb/run.py (24 changes: 19 additions and 5 deletions)

@@ -30,9 +30,11 @@
 import atexit
 from collections import defaultdict
 try:
-    from cPickle import dump
+    from pymor.core.pickle import dump, PicklingError
 except ImportError:
-    from pickle import dump
+    from cPickle import dump, PicklingError
+except ImportError:
+    from pickle import dump, PicklingError
 import datetime
 import os
 if PY2:

@@ -288,7 +290,15 @@ def process_data(v):
     data = {k: process_data(v) for k, v in _current_dataset_data.items()}
 
     with open(os.path.join(_current_dataset, 'DATA'), 'wb') as f:
-        dump(data, f, protocol=-1)
+        try:
+            dump(data, f, protocol=-1)
+        except PicklingError:
+            for kk, vv in data.items():
+                try:
+                    dump({kk: vv}, f, protocol=-1)
+                except PicklingError as e:
+                    print(f'could not pickle "{kk}"')
+                    raise e

Owner commented:

This seems helpful. Can you make a separate PR for it?

Contributor Author replied:

see #10


     def get_metadata(v):
         if isinstance(v, np.ndarray):
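The per-key fallback above stops at the first unpicklable entry and leaves a partially written DATA file behind. A small variation, shown here only as an illustration and not as part of this PR (find_unpicklable_keys is a made-up name), probes each entry against a throwaway in-memory buffer first, so all offending keys can be reported before anything is written:

    # Illustrative helper, not part of simdb.
    import io
    from pickle import dump, PicklingError  # or the pymor/cPickle fallbacks above

    def find_unpicklable_keys(data, protocol=-1):
        """Return the keys whose values cannot be pickled, using a throwaway buffer."""
        bad = []
        for kk, vv in data.items():
            try:
                dump({kk: vv}, io.BytesIO(), protocol=protocol)
            except PicklingError:
                bad.append(kk)
        return bad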

@@ -405,8 +415,7 @@ def dump_failed(filename):
 _saved_excepthook, sys.excepthook = sys.excepthook, _excepthook
 
 
-@atexit.register
-def _exit_hook():
+def declare_finished():
     if _run:
         finished = datetime.datetime.now()
         if _current_dataset:

@@ -415,3 +424,8 @@ def _exit_hook():
                 yaml.dump(finished, f)
         with open(os.path.join(_db_path, 'RUNS', _run, 'FINISHED'), 'wt') as f:
             yaml.dump(finished, f)
+
+
+@atexit.register
+def _exit_hook():
+    declare_finished()

Owner commented:

What is the benefit of these changes?

Contributor Author replied:

In an interactive (notebook) session, I want to declare the run finished manually so that postprocessing can start right away. In a notebook, the run is otherwise only declared finished when the kernel shuts down, so any code that asserts the current run is finished usually forces a notebook restart.
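
With this change, a notebook session could look roughly like the following. Only declare_finished comes from this PR; the surrounding calls are placeholders for whatever the run actually does:

    # Minimal sketch of the intended notebook workflow; everything except
    # declare_finished (added in this PR) is a placeholder.
    from simdb import run

    # ... configure and execute the simulation, writing results into the
    # current run via the usual simdb calls ...

    run.declare_finished()  # write the FINISHED stamps without ending the session

    # Postprocessing that checks for a finished run can now proceed in the same
    # notebook; _exit_hook still calls declare_finished() at interpreter exit.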