MXNet Elastic Inference with Python

The Amazon Elastic Inference (Elastic Inference) enabled version of Apache MXNet lets you use Elastic Inference seamlessly, with few changes to your Apache MXNet (incubating) code. To use an existing MXNet inference script, import the eimx Python package and make one change in the code to partition your model and optimize it for the EIA back end.
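
For example, if an existing script loads a model checkpoint before binding it for inference, the change amounts to a single optimize_for call after loading. The following is a minimal sketch, with a hypothetical checkpoint name; the rest of your inference code stays the same:

import mxnet as mx
import eimx  # importing eimx registers the EIA backend with MXNet

# 'my-model' is a hypothetical checkpoint name; use your own model files
sym, arg_params, aux_params = mx.model.load_checkpoint('my-model', 0)
sym = sym.optimize_for('EIA')  # the one change: partition the graph for the EIA back end
# ... bind the symbol and run inference as before ...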

Note

This topic covers using Elastic Inference enabled MXNet version 1.7.0 and later. For information about using Elastic Inference enabled MXNet 1.5.1 and earlier, see MXNet Elastic Inference 1.5.1 with Python.

Elastic Inference Enabled Apache MXNet

For more information on MXNet set up, see Apache MXNet on AWS.

Preinstalled Elastic Inference Enabled MXNet

Elastic Inference enabled Apache MXNet is available in the AWS Deep Learning AMI and in Docker containers in AWS Deep Learning Containers.

Installing Elastic Inference Enabled MXNet

If you're not using an AWS Deep Learning AMI instance, a pip package is available at Elastic Inference Enabled MXNet so that you can build it into your own Amazon Linux or Ubuntu AMIs. For Elastic Inference MXNet 1.7.0 and later, the name of the wheel starts with eimx, and the number after eimx indicates the version. The following is an example of how to install the wheel:

wget https://amazonei-apachemxnet.s3.amazonaws.com/eimx-version-py2.py3-none-manylinux1_x86_64.whl
pip install eimx-version-py2.py3-none-manylinux1_x86_64.whl
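
As a concrete sketch, assuming the eimx-1.0 package that is described later in this topic (the exact wheel file name here is a guess; check the download location for the actual name):

wget https://amazonei-apachemxnet.s3.amazonaws.com/eimx-1.0-py2.py3-none-manylinux1_x86_64.whl
pip install eimx-1.0-py2.py3-none-manylinux1_x86_64.whl
python -c "import eimx; print(eimx.__version__)"   # should print 'eimx-1.0'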

If you are using Elastic Inference Enabled MXNet 1.5.1, see MXNet Elastic Inference 1.5.1 with Python.

Activate the MXNet Elastic Inference Environment

If you are using the AWS Deep Learning AMI, activate the Python 3 MXNet Elastic Inference environment by using the following command.

source activate amazonei_mxnet_p36

If you are using a different AMI or a container, access the environment where MXNet is installed.
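
In that case, a quick way to confirm that both MXNet and the accelerator library are visible in your environment is an import check such as the following (a minimal sketch):

python -c "import mxnet, eimx; print(mxnet.__version__, eimx.__version__)"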

Validate MXNet for Elastic Inference Setup

If you launched your instance with the Deep Learning AMI (DLAMI), run the following command to verify that the instance is correctly configured:

$ python ~/anaconda3/bin/EISetupValidator.py

You can also download the EISetupValidator.py script and run python EISetupValidator.py.

If your instance is not properly set up with an accelerator, running any of the examples in this section will result in the following error:

Error: Failed to query accelerator metadata. Failed to detect any accelerator

For detailed instructions on how to launch an AWS Deep Learning AMI with an Elastic Inference accelerator, see Setting Up to Launch Amazon EC2 with Elastic Inference.

Check MXNet for Elastic Inference Version

You can verify that MXNet is available to use and check the current version with the following code from the Python terminal:

>>> import mxnet as mx
>>> mx.__version__
'1.7.0'

This returns a version equivalent to the regular, non-Elastic Inference version of MXNet available from GitHub.

Check the accelerator library version with the following code:

>>> import mxnet
>>> import eimx
>>> eimx.__version__
'eimx-1.0'

You can then compare this version with the Release Notes to find specific information about the version that you have.

Using Multiple Elastic Inference Accelerators with MXNet

You can run inference on MXNet when multiple Elastic Inference accelerators are attached to a single Amazon EC2 instance. The procedure for using multiple accelerators is the same as using multiple GPUs with MXNet.

Use the built-in EI Tool binary to get the device ordinal number of all attached Elastic Inference accelerators. For more information on EI Tool, see Monitoring Elastic Inference Accelerators.

/opt/amazon/ei/ei_tools/bin/ei describe-accelerators --json

Your output should look like the following:

{ "ei_client_version": "1.5.0", "time": "Fri Nov 1 03:09:38 2019", "attached_accelerators": 2, "devices": [ { "ordinal": 0, "type": "eia1.xlarge", "id": "eia-679e4c622d584803aed5b42ab6a97706", "status": "healthy" }, { "ordinal": 1, "type": "eia1.xlarge", "id": "eia-6c414c6ee37a4d93874afc00825c2f28", "status": "healthy" } ] }

In the call to optimize_for, specify the dev_id argument with the device ordinal of your desired Elastic Inference accelerator, as follows.

sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-152', 0)
sym = sym.optimize_for("EIA", dev_id=dev_id)
mod = mx.mod.Module(symbol=sym, context=mx.cpu(), label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1,3,224,224))],
         label_shapes=mod._label_shapes)
mod.set_params(arg_params, aux_params, allow_missing=True)
mod.forward(Batch([img]))
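
If you want to choose the device programmatically, one option is to parse the EI Tool's JSON output shown earlier and pick a healthy accelerator. The following is a minimal sketch under that assumption; the tool path and field names are taken from the example output above:

import json
import subprocess

# Ask the EI Tool to describe all attached accelerators as JSON
out = subprocess.check_output(
    ['/opt/amazon/ei/ei_tools/bin/ei', 'describe-accelerators', '--json'])
info = json.loads(out)

# Use the ordinal of the first healthy accelerator as dev_id
healthy = [d['ordinal'] for d in info['devices'] if d['status'] == 'healthy']
dev_id = healthy[0]
print('Using Elastic Inference accelerator ordinal %d' % dev_id)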

Use Elastic Inference with the MXNet Symbol API

Pass EIA as the backend in a call to the optimize_for() method. For more information, see MXNet Symbol API.

Use mx.cpu() as the context in the bind call, as shown in the following example:

import mxnet as mx
import eimx

data = mx.sym.var('data', shape=(1,))
sym = mx.sym.exp(data)
sym = sym.optimize_for("EIA")
executor = sym.simple_bind(ctx=mx.cpu(), grad_req='null')
for i in range(10):
    # Forward call is performed on the remote accelerator
    executor.forward(data=mx.nd.ones((1,)))
    print('Inference %d, output = %s' % (i, executor.outputs[0]))

The following example calls the bind() method:

import mxnet as mx
import eimx

a = mx.sym.Variable('a')
b = mx.sym.Variable('b')
c = 2 * a + b

# Even for inference workloads that run on EIA,
# the context for input ndarrays must be mx.cpu()
a_data = mx.nd.array([1,2], ctx=mx.cpu())
b_data = mx.nd.array([2,3], ctx=mx.cpu())

sym = c.optimize_for("EIA")
e = sym.bind(mx.cpu(), {'a': a_data, 'b': b_data})

# Forward call is performed on the remote accelerator
e.forward()
print('1st Inference, output = %s' % (e.outputs[0]))

# Subsequent calls can pass new data in a forward call
e.forward(a=mx.nd.ones((2,)), b=mx.nd.ones((2,)))
print('2nd Inference, output = %s' % (e.outputs[0]))

The following example calls the bind() method on a pre-trained real model (ResNet-50) from the Symbol API. Use your preferred text editor to create a script called mxnet_resnet50.py with the following content. The script downloads the ResNet-50 model files (resnet-50-0000.params and resnet-50-symbol.json), a list of labels (synset.txt), and an image of a cat. It uses the cat image to get a prediction from the pre-trained model, then looks up the result in the list of labels to return a prediction result.

import mxnet as mx
import eimx
import numpy as np

path = 'http://data.mxnet.io/models/imagenet/'
[mx.test_utils.download(path+'resnet/50-layers/resnet-50-0000.params'),
 mx.test_utils.download(path+'resnet/50-layers/resnet-50-symbol.json'),
 mx.test_utils.download(path+'synset.txt')]

ctx = mx.cpu()

with open('synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]

sym, args, aux = mx.model.load_checkpoint('resnet-50', 0)
sym = sym.optimize_for("EIA")  # partition the symbol with the EIA backend

fname = mx.test_utils.download('https://github.com/dmlc/web-data/blob/master/mxnet/doc/tutorials/python/predict_image/cat.jpg?raw=true')
img = mx.image.imread(fname)

# convert into format (batch, RGB, width, height)
img = mx.image.imresize(img, 224, 224)  # resize
img = img.transpose((2, 0, 1))          # channel first
img = img.expand_dims(axis=0)           # batchify
img = img.astype(dtype='float32')
args['data'] = img

softmax = mx.nd.random_normal(shape=(1,))
args['softmax_label'] = softmax

exe = sym.bind(ctx=ctx, args=args, aux_states=aux, grad_req='null')
exe.forward(data=img)
prob = exe.outputs[0].asnumpy()

# print the top-5
prob = np.squeeze(prob)
a = np.argsort(prob)[::-1]
for i in a[0:5]:
    print('probability=%f, class=%s' % (prob[i], labels[i]))

Then run the script. You should see output similar to the following. MXNet optimizes the model graph for Elastic Inference, loads it on the Elastic Inference accelerator, and then runs inference against it:

src/eia_lib.cc:264 MXNet version 10700 supported
[17:54:11] src/nnvm/legacy_json_util.cc:209: Loading symbol saved by previous version v0.8.0. Attempting to upgrade...
[17:54:11] src/nnvm/legacy_json_util.cc:217: Symbol successfully upgraded!
Using Amazon Elastic Inference Client Library Version: 1.8.0
Number of Elastic Inference Accelerators Available: 1
Elastic Inference Accelerator ID: eia-###############################
Elastic Inference Accelerator Type: eiaX.YYYYYY
Elastic Inference Accelerator Ordinal: 0
[17:54:11] src/executor/graph_executor.cc:2061: Subgraph backend MKLDNN is activated.
probability=0.418679, class=n02119789 kit fox, Vulpes macrotis
probability=0.293495, class=n02119022 red fox, Vulpes vulpes
probability=0.029321, class=n02120505 grey fox, gray fox, Urocyon cinereoargenteus
probability=0.026230, class=n02124075 Egyptian cat
probability=0.022557, class=n02085620 Chihuahua

Use Elastic Inference with the MXNet Module API

Pass EIA as the backend in a call to the optimize_for() method. For more information, see Module API.

To use the MXNet Module API, use the following commands:

# Load saved model
sym, arg_params, aux_params = mx.model.load_checkpoint(model_path, EPOCH_NUM)
sym = sym.optimize_for('EIA')
mod = mx.mod.Module(symbol=sym, context=mx.cpu())

# Only for_training=False is supported for EIA
mod.bind(for_training=False, data_shapes=data_shape)
mod.set_params(arg_params, aux_params)

# Forward call is performed on the remote accelerator
mod.forward(data_batch)

The following script downloads two ResNet-152 model files (resnet-152-0000.params and resnet-152-symbol.json) and a labels list (synset.txt). It also downloads a cat image to get a prediction result from the pre-trained model, then looks up the result in the labels list, returning a prediction.

import mxnet as mx
import eimx
import numpy as np
from collections import namedtuple

Batch = namedtuple('Batch', ['data'])

path = 'http://data.mxnet.io/models/imagenet/'
[mx.test_utils.download(path+'resnet/152-layers/resnet-152-0000.params'),
 mx.test_utils.download(path+'resnet/152-layers/resnet-152-symbol.json'),
 mx.test_utils.download(path+'synset.txt')]

ctx = mx.cpu()

sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-152', 0)
sym = sym.optimize_for('EIA')
mod = mx.mod.Module(symbol=sym, context=ctx, label_names=None)
mod.bind(for_training=False, data_shapes=[('data', (1,3,224,224))],
         label_shapes=mod._label_shapes)
mod.set_params(arg_params, aux_params, allow_missing=True)

with open('synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]

fname = mx.test_utils.download('https://github.com/dmlc/web-data/blob/master/mxnet/doc/tutorials/python/predict_image/cat.jpg?raw=true')
img = mx.image.imread(fname)

# convert into format (batch, RGB, width, height)
img = mx.image.imresize(img, 224, 224)  # resize
img = img.transpose((2, 0, 1))          # channel first
img = img.expand_dims(axis=0)           # batchify

mod.forward(Batch([img]))
prob = mod.get_outputs()[0].asnumpy()

# print the top-5
prob = np.squeeze(prob)
a = np.argsort(prob)[::-1]
for i in a[0:5]:
    print('probability=%f, class=%s' % (prob[i], labels[i]))

Use Elastic Inference with the MXNet Gluon API

The Gluon API in MXNet provides a clear, concise, and easy-to-use API for building and training machine learning models. For more information, see the Gluon Documentation.

To use the MXNet Gluon API for inference-only tasks, use mx.cpu() for the context and pass EIA as the backend when you call hybridize(), as in the following commands:

import mxnet as mx
import eimx
from mxnet.gluon import nn

def create():
    net = nn.HybridSequential()
    net.add(nn.Dense(2))
    return net

# get a simple Gluon nn model
net = create()
net.initialize(ctx=mx.cpu())

# hybridize the model with static alloc and the EIA backend
net.hybridize(backend='EIA', static_alloc=True, static_shape=True)

# allocate input array and run inference
x = mx.nd.random.uniform(-1, 1, (3, 4), ctx=mx.cpu())
predictions = net(x)
print(predictions)

You should be able to see the following output to confirm that you are using Elastic Inference:

Using Amazon Elastic Inference Client Library Version: xxxxxxxx
Number of Elastic Inference Accelerators Available: 1
Elastic Inference Accelerator ID: eia-xxxxxxxxxxxxxxxxxxxxxxxx
Elastic Inference Accelerator Type: xxxxxxxx

Loading Parameters

There are a couple of different ways to load Gluon models. One way is to load model parameters from a file and call hybridize with an EIA backend. For example:

# save the parameters to a file
net.save_parameters('params.gluon')

# create a new network using the saved parameters
net2 = create()
net2.load_parameters('params.gluon', ctx=mx.cpu())
net2.hybridize(backend="EIA", static_alloc=True, static_shape=True)
predictions = net2(x)
print(predictions)

Loading Symbol and Parameters Files

You can also export the model’s symbol and parameters to a file, then import the model as shown in the following:

# export both symbol and parameters to a file
net2.export('export')

# create a new network from the exported network
net3 = nn.SymbolBlock.imports('export-symbol.json', ['data'], 'export-0000.params', ctx=mx.cpu())
net3.hybridize(backend="EIA", static_alloc=True, static_shape=True)
predictions = net3(x)

If you have a model exported as symbol and parameter files, you can simply import those files and run inference.

import mxnet as mx
import eimx
import numpy as np
from mxnet.gluon import nn

ctx = mx.cpu()

path = 'http://data.mxnet.io/models/imagenet/'
[mx.test_utils.download(path+'resnet/50-layers/resnet-50-0000.params'),
 mx.test_utils.download(path+'resnet/50-layers/resnet-50-symbol.json'),
 mx.test_utils.download(path+'synset.txt')]

with open('synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]

fname = mx.test_utils.download('https://github.com/dmlc/web-data/blob/master/mxnet/doc/tutorials/python/predict_image/cat.jpg?raw=true')
img = mx.image.imread(fname)

# convert into format (batch, RGB, width, height)
img = img.as_in_context(ctx)
img = mx.image.imresize(img, 224, 224)  # resize
img = img.transpose((2, 0, 1))          # channel first
img = img.expand_dims(axis=0)           # batchify
img = img.astype(dtype='float32')       # match data type

# import hybridized model symbols
resnet50 = nn.SymbolBlock.imports('resnet-50-symbol.json', ['data','softmax_label'],
                                  'resnet-50-0000.params', ctx=ctx)
label = mx.nd.array([0], ctx=ctx)  # dummy softmax label

# hybridize with EIA as the backend
resnet50.hybridize(backend="EIA", static_alloc=True, static_shape=True)

prob = resnet50(img, label)
idx = prob.topk(k=5)[0]
for i in idx:
    i = int(i.asscalar())
    print('With prob = %.5f, it contains %s' % (prob[0,i].asscalar(), labels[i]))

Loading From Model Zoo

You can also use pre-trained models from the Gluon model zoo, as shown in the following example:

Note

All pre-trained models expect inputs to be normalized in the same way as described in the model zoo documentation.

import mxnet as mx
import eimx
import numpy as np
from mxnet.gluon.model_zoo import vision

ctx = mx.cpu()

mx.test_utils.download('http://data.mxnet.io/models/imagenet/synset.txt')
with open('synset.txt', 'r') as f:
    labels = [l.rstrip() for l in f]

fname = mx.test_utils.download('https://github.com/dmlc/web-data/blob/master/mxnet/doc/tutorials/python/predict_image/cat.jpg?raw=true')
img = mx.image.imread(fname)

# convert into format (batch, RGB, width, height)
img = img.as_in_context(ctx)            # image must be in the CPU context
img = mx.image.imresize(img, 224, 224)  # resize
img = mx.image.color_normalize(img.astype(dtype='float32')/255,
                               mean=mx.nd.array([0.485, 0.456, 0.406]),
                               std=mx.nd.array([0.229, 0.224, 0.225]))  # normalize
img = img.transpose((2, 0, 1))          # channel first
img = img.expand_dims(axis=0)           # batchify

resnet50 = vision.resnet50_v1(pretrained=True, ctx=ctx)
# hybridize with EIA as the backend
resnet50.hybridize(backend="EIA", static_alloc=True, static_shape=True)

prob = resnet50(img).softmax()  # predict and normalize output
idx = prob.topk(k=5)[0]         # get top 5 results
for i in idx:
    i = int(i.asscalar())
    print('With prob = %.5f, it contains %s' % (prob[0,i].asscalar(), labels[i]))

Troubleshooting

  • If you get the following error message when you call sym.optimize_for('EIA'):

    [22:00:31] src/c_api/c_api_symbolic.cc:1498: Error optimizing for backend 'EIA' cannot be found

    You might have forgotten to import the eimx package.

  • If you do not see the following EIA initialization message when you run inference:

    Using Amazon Elastic Inference Client Library Version: 1.8.0
    Number of Elastic Inference Accelerators Available: 1
    Elastic Inference Accelerator ID: eia-22cb7576547447dbb5718cbfe4e3f0ce
    Elastic Inference Accelerator Type: eia2.xlarge
    Elastic Inference Accelerator Ordinal: 0

    You might have forgotten to call sym.optimize_for('EIA') or block.hybridize(backend='EIA') to prepare your model for running on EIA. If you don't call it, inference just runs on the CPU instead of on the Elastic Inference accelerators.

  • If you upgrade from an earlier version and you get the following error:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
    AttributeError: module 'mxnet' has no attribute 'eia'

    You might still have the legacy mx.eia() in your code. Replace instances of mx.eia() with mx.cpu() if you are using version 1.7.0 or later.

  • Elastic Inference is only for production inference use cases and does not support any model training. When you use either the Symbol API or the Module API, do not call the backward() method or call bind() with for_training=True. Because the default value of for_training is True, make sure you set for_training=False manually in cases such as the example in Use Elastic Inference with the MXNet Module API.

  • For Gluon, do not call training-specific functions or you will receive the following error:

    Using Amazon Elastic Inference Client Library Version: 1.8.0
    Number of Elastic Inference Accelerators Available: #
    Elastic Inference Accelerator ID: eia-####################
    Elastic Inference Accelerator Type: eia#.#####
    Elastic Inference Accelerator Ordinal: #
    Error! Operator does not support backward
    Traceback (most recent call last):
      File "gluon_train.py", line 130, in <module>
        train(opt.epochs, ctx)
      File "gluon_train.py", line 110, in train
        metric.update([label], [output])
      File "/home/ubuntu/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/metric.py", line 493, in update
        pred_label = pred_label.asnumpy().astype('int32')
      File "/home/ubuntu/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/ndarray/ndarray.py", line 2566, in asnumpy
        ctypes.c_size_t(data.size)))
      File "/home/ubuntu/anaconda3/envs/amazonei_mxnet_p36/lib/python3.6/site-packages/mxnet/base.py", line 246, in check_call
        raise get_last_ffi_error()
    mxnet.base.MXNetError: Traceback (most recent call last):
      File "src/c_api/c_api.cc", line 318
    MXNetError: Check failed: callFStatefulComp(stateful_forward_flag, state_op_inst, in_shapes.data(), in_dims.data(), in_data.data(), in_types.data(), in_verIDs.data(), in_dev_type.data(), in_dev_id.data(), in_data.size(), out_shapes.data(), out_dims.data(), out_data.data(), out_types.data(), out_verIDs.data(), out_dev_type.data(), out_dev_id.data(), out_data.size(), cpu_malloc, &cpu_alloc, gpu_malloc, &gpu_alloc, cuda_stream, sparse_malloc, &sparse_alloc, in_stypes.data(), out_stypes.data(), in_indices.data(), out_indices.data(), in_indptr.data(), out_indptr.data(), in_indices_shapes.data(), out_indices_shapes.data(), in_indptr_shapes.data(), out_indptr_shapes.data(), rng_cpu_states, rng_gpu_states): Error calling FStatefulCompute for custom operator '_eia_subgraph_op'
  • Because training is not allowed, there is no point in initializing an optimizer for inference.

  • A model trained on an earlier version of MXNet works on a later version of MXNet Elastic Inference because the format is backward compatible (for example, train a model on MXNet 1.3 and run it on MXNet Elastic Inference 1.4). However, you might run into undefined behavior if you train on a later version of MXNet (for example, train a model on MXNet master and run it on MXNet Elastic Inference 1.4).

  • Different sizes of Elastic Inference accelerators have different amounts of GPU memory. If your model requires more GPU memory than is available in your accelerator, you get a message like the log below. If you run into this message, use a larger accelerator size with more memory: stop your instance and restart it with a larger accelerator.

    mxnet.base.MXNetError: [06:16:17] src/operator/subgraph/eia/eia_subgraph_op.cc:206: Last Error:
    EI Error Code: [51, 8, 31]
    EI Error Description: Accelerator out of memory. Consider using a larger accelerator.
    EI Request ID: MX-A19B0DE6-7999-4580-8C49-8EA7ADSFFCB  --  EI Accelerator ID: eia-cb0aasdfdfsdf2aacab7
    EI Client Version: 1.2.12
  • For Gluon, make sure that you hybridize the model and pass the static_alloc=True and static_shape=True options. Otherwise, each inference call reloads the model, which causes performance degradation and potential OOM errors. See the previous item for more about OOM errors.

  • Calling reshape explicitly by using either the Module or the Symbol API, or implicitly by using different shapes for input NDArrays in different forward passes, can lead to OOM errors. Before being reshaped, the model is not cleaned up on the accelerator until the session is destroyed. In Gluon, inferring with inputs of differing shapes causes the model to re-allocate memory. For Elastic Inference, this means that the model is reloaded on the accelerator, leading to performance degradation and potential OOM errors such as the following. You can either pad your data so that all shapes are the same, or bind the model with different shapes to use multiple executors. The latter option may result in out-of-memory errors because the model is duplicated on the accelerator.

    [Fri Feb 19 01:47:49 2021, 397658us] [Execution Engine][MXNet][3] Failed - Last Error:
    EI Error Code: [51, 8, 31]
    EI Error Description: Accelerator out of memory. Consider using a larger accelerator.
    EI Request ID: MX-78E568D8-9105-468A-8E1C-7D1FFDF9934E  --  EI Accelerator ID: eia-09803cc86d4044e6b4e8d4a8ecd0267e
    EI Client Version: 1.8.0
    src/eia_lib.cc:88 Error: Last Error:
    EI Error Code: [51, 8, 31]
    EI Error Description: Accelerator out of memory. Consider using a larger accelerator.
    EI Request ID: MX-78E568D8-9105-468A-8E1C-7D1FFDF9934E  --  EI Accelerator ID: eia-09803cc86d4044e6b4e8d4a8ecd0267e
    EI Client Version: 1.8.0
  • If you get an error importing the eimx package similar to the following:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/eimx/__init__.py", line 20, in <module>
        mxnet.library.load(path_lib, debug)
      File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/library.py", line 56, in load
        check_call(_LIB.MXLoadLib(chararr, mx_uint(verbose_val)))
      File "/home/ubuntu/anaconda3/lib/python3.6/site-packages/mxnet/base.py", line 246, in check_call
        raise get_last_ffi_error()
    mxnet.base.MXNetError: Traceback (most recent call last):
      File "src/c_api/c_api.cc", line 1521
    MXNetError: Library version (7) does not match MXNet version (10)

    You might be using the wrong version of MXNet. MXNet release 1.7.0 uses version 7 and MXNet release 1.8.0 uses version 10. The eimx-1.0 package must be used with MXNet release 1.7.0 only.

  • If you get an error importing the eimx package similar to either of the following:

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/ubuntu/.local/lib/python3.6/site-packages/eimx/__init__.py", line 20, in <module>
        mxnet.library.load(path_lib, debug)
    AttributeError: module 'mxnet' has no attribute 'library'

    Traceback (most recent call last):
      File "<stdin>", line 1, in <module>
      File "/home/ubuntu/.local/lib/python3.6/site-packages/eimx/__init__.py", line 20, in <module>
        mxnet.library.load(path_lib, debug)
    TypeError: load() takes 1 positional argument but 2 were given

    You might be using an older version of MXNet. Check that you're using an installation of MXNet release 1.7.0 with the eimx-1.0 package. After you install the correct version of MXNet, you should see the following message when the eimx package imports successfully:

    src/eia_lib.cc:264 MXNet version 10700 supported
  • If you get an error similar to the following:

    [22:26:23] src/executor/graph_executor.cc:1981: Subgraph backend MKLDNN is activated.
    python: /root/deps/aws-sdk-cpp/aws-cpp-sdk-core/source/utils/UUID.cpp:83: static Aws::Utils::UUID Aws::Utils::UUID::RandomUUID(): Assertion `secureRandom' failed.
    Aborted (core dumped)

    You tried to save the model after running sym.optimize_for('EIA') and reload it later. Currently, models optimized for EIA cannot be saved and reloaded. You must call sym.optimize_for('EIA') at the beginning of your script, every time you reload your model from disk. The time it takes to partition your model and optimize it for EIA is relatively small, so there is no benefit to saving and reloading it anyway.
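
    In other words, run the partition step at the start of every run, never between a save and a reload. The following is a minimal sketch of the safe pattern, assuming a checkpoint named 'resnet-152' as in the earlier examples:

    import mxnet as mx
    import eimx

    # Load the unmodified model from disk at the start of every script run...
    sym, arg_params, aux_params = mx.model.load_checkpoint('resnet-152', 0)

    # ...then partition for EIA in memory each time. Do not save the partitioned symbol.
    sym = sym.optimize_for('EIA')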