diff --git a/doc/source/user_guide/tutorials/import_data/extract_and_explore_results_data.rst b/doc/source/user_guide/tutorials/import_data/extract_and_explore_results_data.rst new file mode 100644 index 0000000000..38aea40a36 --- /dev/null +++ b/doc/source/user_guide/tutorials/import_data/extract_and_explore_results_data.rst @@ -0,0 +1,166 @@ +.. _ref_tutorials_extract_and_explore_results_data: + +================================ +Extract and explore results data +================================ + +:bdg-mapdl:`MAPDL` :bdg-lsdyna:`LS-DYNA` :bdg-fluent:`FLUENT` :bdg-cfx:`CFX` + +.. include:: ../../../links_and_refs.rst +.. |get_entity_data| replace:: :func:`get_entity_data()` +.. |get_entity_data_by_id| replace:: :func:`get_entity_data_by_id()` + +This tutorial shows how to extract and explore results data from a result file. + +When you extract a result from a result file DPF stores it in a |Field|. +Thus, this |Field| contains the data of the result associated with it. + +.. note:: + + When DPF-Core returns the |Field| object, what Python actually has is a client-side + representation of the |Field|, not the entirety of the |Field| itself. This means + that all the data of the field is stored within the DPF service. This is important + because when building your workflows, the most efficient way of interacting with result data + is to minimize the exchange of data between Python and DPF, either by using operators + or by accessing exclusively the data that is needed. + +:jupyter-download-script:`Download tutorial as Python script` +:jupyter-download-notebook:`Download tutorial as Jupyter notebook` + +Get the result file +------------------- + +First, import a result file. For this tutorial, you can use one available in the |Examples| module. +For more information about how to import your own result file in DPF, see the :ref:`ref_tutorials_import_result_file` +tutorial. + +Here, we extract the displacement results. The displacement |Result| object gives a |FieldsContainer| when evaluated. +Thus, we get a |Field| from this |FieldsContainer|. + +.. jupyter-execute:: + + # Import the ``ansys.dpf.core`` module + from ansys.dpf import core as dpf + # Import the examples module + from ansys.dpf.core import examples + # Import the operators module + from ansys.dpf.core import operators as ops + + # Define the result file path + result_file_path_1 = examples.download_transient_result() + + # Create the model + model_1 = dpf.Model(data_sources=result_file_path_1) + + # Extract the displacement results for the last time step + disp_results = model_1.results.displacement.on_last_time_freq.eval() + + # Get the displacement field for the last time step + disp_field = disp_results[0] + + # Print the displacement Field + print(disp_field) + +Extract all the data from a |Field| +----------------------------------- + +You can extract the entire data in a |Field| as: + +- An array (numpy array); +- A list. + +Data as an array +^^^^^^^^^^^^^^^^ + +.. jupyter-execute:: + + # Get the displacement data as an array + data_array = disp_field.data + + # Print the data as an array + print("Displacement data as an array: ", '\n', data_array) + +Note that this array is a genuine, local, numpy array (overloaded by the DPFArray): + +.. jupyter-execute:: + + # Print the array type + print("Array type: ", type(data_array)) + +Data as a list +^^^^^^^^^^^^^^ + +.. 
jupyter-execute::
+
+    # Get the displacement data as a list
+    data_list = disp_field.data_as_list
+    # Print the data as a list
+    print("Displacement data as a list: ", '\n', data_list)
+
+Extract specific data from a field
+----------------------------------
+
+If you need to access the data of specific entities (nodes, elements, ...), you can extract it with two approaches:
+
+- :ref:`Based on its index <ref_extract_specific_data_by_index>` (data position in the |Field|) by using the |get_entity_data| method;
+- :ref:`Based on the entity id <ref_extract_specific_data_by_id>` by using the |get_entity_data_by_id| method.
+
+The |Field| data is organized with respect to its scoping ids. Note that the entity with id=533
+corresponds to index=2 within this |Field|.
+
+.. jupyter-execute::
+
+    # Get the index of the entity with id=533
+    index_533_entity = disp_field.scoping.index(id=533)
+    # Print the index
+    print("Index entity id=533: ",index_533_entity)
+
+Be aware that scoping ids are not sequential. You can get the id of the entity at position 533
+of the |Field| with:
+
+.. jupyter-execute::
+
+    # Get the id of the entity with index=533
+    id_533_entity = disp_field.scoping.id(index=533)
+    print("Id entity index=533: ",id_533_entity)
+
+.. _ref_extract_specific_data_by_index:
+
+Get the data by the entity index
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. jupyter-execute::
+
+    # Get the data from the entity at index=3 in the field
+    data_3_entity = disp_field.get_entity_data(index=3)
+    # Print the data
+    print("Data entity index=3: ", data_3_entity)
+
+.. _ref_extract_specific_data_by_id:
+
+Get the data by the entity id
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. jupyter-execute::
+
+    # Get the data from the entity with id=533
+    data_533_entity = disp_field.get_entity_data_by_id(id=533)
+    # Print the data
+    print("Data entity id=533: ", data_533_entity)
+
+Extract specific data from a field using a loop over the array
+--------------------------------------------------------------
+
+While the methods above are acceptable when requesting data for a few elements
+or nodes, they should not be used when looping over the entire array. For efficiency,
+the data of a |Field| can be recovered locally before sending a large number of requests:
+
+.. jupyter-execute::
+
+    # Create a deep copy of the field that can be accessed and modified locally.
+    with disp_field.as_local_field() as f:
+        for i in disp_field.scoping.ids[2:50]:
+            f.get_entity_data_by_id(i)
+
+    # Print the field
+    print(f)
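+
+Finally, remember the note at the top of this tutorial: whenever a computation exists as a DPF operator,
+running it server side is cheaper than looping over the data in Python. As a hedged sketch of this idea,
+the ``min_max`` operator below reduces the whole displacement |Field| on the server, so only two small
+result fields travel back to Python.
+
+.. jupyter-execute::
+
+    # Compute the minimum and maximum of the displacement Field server side
+    min_max_op = ops.min_max.min_max(field=disp_field)
+
+    # Only the two small resulting fields are transferred to Python
+    print("Min displacement: ", min_max_op.outputs.field_min().data)
+    print("Max displacement: ", min_max_op.outputs.field_max().data)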
\ No newline at end of file
diff --git a/doc/source/user_guide/tutorials/import_data/extract_and_explore_results_metadata.rst b/doc/source/user_guide/tutorials/import_data/extract_and_explore_results_metadata.rst
new file mode 100644
index 0000000000..32f0fa0228
--- /dev/null
+++ b/doc/source/user_guide/tutorials/import_data/extract_and_explore_results_metadata.rst
@@ -0,0 +1,159 @@
+.. _ref_tutorials_extract_and_explore_results_metadata:
+
+====================================
+Extract and explore results metadata
+====================================
+
+:bdg-mapdl:`MAPDL` :bdg-lsdyna:`LS-DYNA` :bdg-fluent:`FLUENT` :bdg-cfx:`CFX`
+
+.. include:: ../../../links_and_refs.rst
+.. |ResultInfo| replace:: :class:`ResultInfo`
+
+This tutorial shows how to extract and explore results metadata from a result file.
+
+:jupyter-download-script:`Download tutorial as Python script`
+:jupyter-download-notebook:`Download tutorial as Jupyter notebook`
+
+Get the result file
+-------------------
+
+First, import a result file. For this tutorial, you can use one available in the |Examples| module.
+For more information about how to import your own result file in DPF, see the :ref:`ref_tutorials_import_result_file`
+tutorial.
+
+.. jupyter-execute::
+
+    # Import the ``ansys.dpf.core`` module
+    from ansys.dpf import core as dpf
+    # Import the examples module
+    from ansys.dpf.core import examples
+    # Import the operators module
+    from ansys.dpf.core import operators as ops
+
+    # Define the result file path
+    result_file_path_1 = examples.download_transient_result()
+    # Create the model
+    model_1 = dpf.Model(data_sources=result_file_path_1)
+
+Explore the results general metadata
+------------------------------------
+
+You can explore the general results metadata, before extracting the results, by using
+the |ResultInfo| object and its methods. This metadata includes:
+
+- Analysis type;
+- Physics type;
+- Number of results;
+- Unit system;
+- Solver version, date and time;
+- Job name.
+
+.. jupyter-execute::
+
+    # Define the ResultInfo object
+    result_info_1 = model_1.metadata.result_info
+
+    # Get the analysis type
+    analysis_type = result_info_1.analysis_type
+    # Print the analysis type
+    print("Analysis type: ",analysis_type, "\n")
+
+    # Get the physics type
+    physics_type = result_info_1.physics_type
+    # Print the physics type
+    print("Physics type: ",physics_type, "\n")
+
+    # Get the number of available results
+    number_of_results = result_info_1.n_results
+    # Print the number of available results
+    print("Number of available results: ",number_of_results, "\n")
+
+    # Get the unit system
+    unit_system = result_info_1.unit_system
+    # Print the unit system
+    print("Unit system: ",unit_system, "\n")
+
+    # Get the solver version, date and time
+    solver_version = result_info_1.solver_version
+    solver_date = result_info_1.solver_date
+    solver_time = result_info_1.solver_time
+
+    # Print the solver version, date and time
+    print("Solver version: ",solver_version, "\n")
+    print("Solver date: ", solver_date, "\n")
+    print("Solver time: ",solver_time, "\n")
+
+    # Get the job name
+    job_name = result_info_1.job_name
+    # Print the job name
+    print("Job name: ",job_name, "\n")
+
+Explore a result metadata
+-------------------------
+
+When you extract a result from a result file, DPF stores it in a |Field|.
+This |Field| therefore contains the metadata for the result associated with it.
+This metadata includes:
+
+- Location;
+- Scoping (type and quantity of entities);
+- Elementary data count (number of entities, that is, how many data vectors there are);
+- Components count (dimension of the vectors; here, displacement has 3 components: X, Y and Z);
+- Shape of the data stored (tuple with the elementary data count and the components count);
+- Field size (length of the entire data vector, equal to the number of elementary data times the number of components);
+- Unit of the data.
+
+Here, we explore the metadata of the displacement results.
+
+Start by extracting the displacement results.
+
+.. jupyter-execute::
+
+    # Extract the displacement results
+    disp_results = model_1.results.displacement.eval()
+
+    # Get the displacement field
+    disp_field = disp_results[0]
+
+Explore the displacement results metadata:
+
+.. 
jupyter-execute:: + + # Get the location of the displacement data + location = disp_field.location + # Print the location + print("Location: ", location,'\n') + + # Get the displacement Field scoping + scoping = disp_field.scoping + # Print the Field scoping + print("Scoping: ", '\n',scoping, '\n') + + # Get the displacement Field scoping ids + scoping_ids = disp_field.scoping.ids # Available entities ids + # Print the Field scoping ids + print("Scoping ids: ", scoping_ids, '\n') + + # Get the displacement Field elementary data count + elementary_data_count = disp_field.elementary_data_count + # Print the elementary data count + print("Elementary data count: ", elementary_data_count, '\n') + + # Get the displacement Field components count + components_count = disp_field.component_count + # Print the components count + print("Components count: ", components_count, '\n') + + # Get the displacement Field size + field_size = disp_field.size + # Print the Field size + print("Size: ", field_size, '\n') + + # Get the displacement Field shape + shape = disp_field.shape + # Print the Field shape + print("Shape: ", shape, '\n') + + # Get the displacement Field unit + unit = disp_field.unit + # Print the displacement Field unit + print("Unit: ", unit, '\n') \ No newline at end of file diff --git a/doc/source/user_guide/tutorials/import_data/import_result_file.rst b/doc/source/user_guide/tutorials/import_data/import_result_file.rst new file mode 100644 index 0000000000..f8d22047e2 --- /dev/null +++ b/doc/source/user_guide/tutorials/import_data/import_result_file.rst @@ -0,0 +1,363 @@ +.. _ref_tutorials_import_result_file: + +=========================== +Import a result file in DPF +=========================== + +:bdg-mapdl:`MAPDL` :bdg-lsdyna:`LS-DYNA` :bdg-fluent:`FLUENT` :bdg-cfx:`CFX` + +.. include:: ../../../links_and_refs.rst +.. |set_result_file_path| replace:: :func:`set_result_file_path() ` +.. |add_file_path| replace:: :func:`add_file_path() ` + +This tutorial shows how to import a result file in DPF. + +There are two approaches to import a result file in DPF: + +- :ref:`Using the DataSources object ` +- :ref:`Using the Model object ` + +.. note:: + + The |Model| extracts a large amount of information by default (results, mesh and analysis data). + If using this helper takes a long time for processing the code, mind using a |DataSources| object + and instantiating operators directly with it. + +:jupyter-download-script:`Download tutorial as Python script` +:jupyter-download-notebook:`Download tutorial as Jupyter notebook` + +Define the result file path +--------------------------- + +Both approaches need a file path to be defined. For this tutorial, you can use a result file available in +the |Examples| module. + +.. tab-set:: + + .. tab-item:: MAPDL + + .. jupyter-execute:: + + # Import the ``ansys.dpf.core`` module + from ansys.dpf import core as dpf + # Import the examples module + from ansys.dpf.core import examples + # Import the operators module + from ansys.dpf.core import operators as ops + + # Define the .rst result file path + result_file_path_11 = examples.find_static_rst() + + # Define the modal superposition harmonic analysis (.mode, .rfrq and .rst) result files paths + result_file_path_12 = examples.download_msup_files_to_dict() + + + # Print the result files paths + print("Result file path 11:", "\n",result_file_path_11, "\n") + print("Result files paths 12:", "\n",result_file_path_12, "\n") + + .. tab-item:: LSDYNA + + .. 
jupyter-execute:: + + # Import the ``ansys.dpf.core`` module + from ansys.dpf import core as dpf + # Import the examples module + from ansys.dpf.core import examples + # Import the operators module + from ansys.dpf.core import operators as ops + + # Define the .d3plot result files paths + result_file_path_21 = examples.download_d3plot_beam() + + # Define the .binout result file path + result_file_path_22 = examples.download_binout_matsum() + + # Print the result files paths + print("Result files paths 21:", "\n",result_file_path_21, "\n") + print("Result file path 22:", "\n",result_file_path_22, "\n") + + .. tab-item:: Fluent + + .. jupyter-execute:: + + # Import the ``ansys.dpf.core`` module + from ansys.dpf import core as dpf + # Import the examples module + from ansys.dpf.core import examples + # Import the operators module + from ansys.dpf.core import operators as ops + + # Define the project .flprj result file path + result_file_path_31 = examples.download_fluent_axial_comp()["flprj"] + + # Define the CFF .cas.h5/.dat.h5 result files paths + result_file_path_32 = examples.download_fluent_axial_comp() + + # Print the result files paths + print("Result file path 31:", "\n",result_file_path_31, "\n") + print("Result files paths 32:", "\n",result_file_path_32, "\n") + + .. tab-item:: CFX + + .. jupyter-execute:: + + # Import the ``ansys.dpf.core`` module + from ansys.dpf import core as dpf + # Import the examples module + from ansys.dpf.core import examples + # Import the operators module + from ansys.dpf.core import operators as ops + + # Define the project .res result file path + result_file_path_41 = examples.download_cfx_mixing_elbow() + + # Define the CFF .cas.cff/.dat.cff result files paths + result_file_path_42 = examples.download_cfx_heating_coil() + + # Print the result files paths + print("Result file path 41:", "\n",result_file_path_41, "\n") + print("Result files paths 42:", "\n",result_file_path_42, "\n") + +.. _ref_import_result_file_data_sources: + +Use a |DataSources| +------------------- + +The |DataSources| object manages paths to their files. Use this object to declare data +inputs for PyDPF-Core APIs. + +.. tab-set:: + + .. tab-item:: MAPDL + + **a) `.rst` result file** + + Create the |DataSources| object and give the path to the result file to the *'result_path'* argument. + + .. jupyter-execute:: + + # Create the DataSources object + # Use the ``result_path`` argument and give the result file path + ds_11 = dpf.DataSources(result_path=result_file_path_11) + + **b) `.mode`, `.rfrq` and `.rst` result files** + + In the modal superposition, modal coefficients are multiplied by mode shapes (of a previous modal analysis) + to analyse a structure under given boundary conditions in a range of frequencies. Doing this expansion “on demand” + in DPF instead of in the solver reduces the size of the result files. + + The expansion is recursive in DPF: first the modal response is read. Then, *upstream* mode shapes are found in + the |DataSources|, where they are read and expanded. Upstream refers to a source that provides data to a + particular process. + + To create a recursive workflow add the upstream |DataSources| object, that contains the upstream + data files, to the main |DataSources| object. + + .. 
jupyter-execute:: + + # Create the main DataSources object + ds_12 = dpf.DataSources() + # Define the main result file path + ds_12.set_result_file_path(filepath=result_file_path_12["rfrq"], key='rfrq') + + # Create the upstream DataSources object with the main upstream file path + upstream_ds_12 = dpf.DataSources(result_path=result_file_path_12["mode"]) + # Add the additional upstream file path to the upstream DataSources object + upstream_ds_12.add_file_path(filepath=result_file_path_12["rst"]) + + # Add the upstream DataSources to the main DataSources object + ds_12.add_upstream(upstream_data_sources=upstream_ds_12) + + .. tab-item:: LSDYNA + + **a) `.d3plot` result file** + + The d3plot file does not contain information related to units. In this case, as the + simulation was run through Mechanical, a ``file.actunits`` file is produced. If this + file is supplemented in the |DataSources|, the units will be correctly fetched for all + results in the file as well as for the mesh. + + Thus, we must use the |set_result_file_path| and the |add_file_path| methods to add the main + and the additional result file to the |DataSources| object. + + .. jupyter-execute:: + + # Create the DataSources object + ds_21 = dpf.DataSources() + + # Define the main result file path + ds_21.set_result_file_path(filepath=result_file_path_21[0], key="d3plot") + + # Add the additional file path related to the units + ds_21.add_file_path(filepath=result_file_path_21[3], key="actunits") + + **b) `.binout` result file** + + The extension key *`.binout`* is not explicitly specified in the result file. Thus, we use + the |set_result_file_path| method and give the extension key to the *'key'* argument to correctly + add the result file path to the |DataSources| object. + + .. jupyter-execute:: + + # Create the DataSources object + ds_22 = dpf.DataSources() + + # Define the path to the result file + # Use the ``key`` argument and give the file extension key + ds_22.set_result_file_path(filepath=result_file_path_22, key="binout") + + .. tab-item:: Fluent + + **a) `.flprj` result file** + + Create the |DataSources| object and give the path to the result file to the *'result_path'* argument. + + .. jupyter-execute:: + + # Create the DataSources object + # Use the ``result_path`` argument and give the result file path + ds_31 = dpf.DataSources(result_path=result_file_path_31) + + **b) `.cas.h5`, `.dat.h5` result files** + + Here, we have a main and an additional result file with two extensions keys. + + Thus, you must use the |set_result_file_path| and the |add_file_path| methods to add the main and + additional result file to the |DataSources| object and explicitly give the *first* extension key to + their *'key'* argument. + + .. jupyter-execute:: + + # Create the DataSources object + ds_32 = dpf.DataSources() + + # Define the path to the main result file + # Use the ``key`` argument and give the first extension key + ds_32.set_result_file_path(filepath=result_file_path_32['cas'][0], key="cas") + + # Add the additional result file path to the DataSources + # Use the ``key`` argument and give the first extension key + ds_32.add_file_path(filepath=result_file_path_32['dat'][0], key="dat") + + .. tab-item:: CFX + + **a) `.res` result file** + + Create the |DataSources| object and give the path to the result file to the *'result_path'* argument. + + .. 
jupyter-execute::
+
+            # Create the DataSources object
+            # Use the ``result_path`` argument and give the result file path
+            ds_41 = dpf.DataSources(result_path=result_file_path_41)
+
+        **b) `.cas.cff`, `.dat.cff` result files**
+
+        Here, we have a main and an additional result file with two extension keys.
+
+        Thus, you must use the |set_result_file_path| and the |add_file_path| methods to add the main and
+        additional result file to the |DataSources| object. Also, you must explicitly give the *first* extension key to
+        the *'key'* argument.
+
+        .. jupyter-execute::
+
+            # Create the DataSources object
+            ds_42 = dpf.DataSources()
+
+            # Define the path to the main result file
+            # Use the ``key`` argument and give the first extension key
+            ds_42.set_result_file_path(filepath=result_file_path_42["cas"], key="cas")
+
+            # Add the additional result file path to the DataSources
+            # Use the ``key`` argument and give the first extension key
+            ds_42.add_file_path(filepath=result_file_path_42["dat"], key="dat")
+
+.. _ref_import_result_file_model:
+
+Use a |Model|
+-------------
+
+The |Model| is a helper designed to give shortcuts to access the analysis results
+metadata and to instantiate results providers by opening a |DataSources| or a streams object.
+
+To create a |Model|, you can give the *'data_sources'* argument either:
+
+- The result file path, in case you are working with a single result file that has an explicit extension key;
+- A |DataSources| object.
+
+.. tab-set::
+
+    .. tab-item:: MAPDL
+
+        **a) `.rst` result file**
+
+        .. jupyter-execute::
+
+            # Create the model with the result file path
+            model_11 = dpf.Model(data_sources=result_file_path_11)
+
+            # Create the model with the DataSources object
+            model_12 = dpf.Model(data_sources=ds_11)
+
+        **b) `.mode`, `.rfrq` and `.rst` result files**
+
+        .. jupyter-execute::
+
+            # Create the model with the DataSources object
+            model_13 = dpf.Model(data_sources=ds_12)
+
+    .. tab-item:: LSDYNA
+
+        **a) `.d3plot` result file**
+
+        .. jupyter-execute::
+
+            # Create the model with the DataSources object
+            model_21 = dpf.Model(data_sources=ds_21)
+
+        **b) `.binout` result file**
+
+        .. jupyter-execute::
+
+            # Create the model with the DataSources object
+            model_22 = dpf.Model(data_sources=ds_22)
+
+    .. tab-item:: Fluent
+
+        **a) `.flprj` result file**
+
+        .. jupyter-execute::
+
+            # Create the model with the result file path
+            model_31 = dpf.Model(data_sources=result_file_path_31)
+
+            # Create the model with the DataSources object
+            model_32 = dpf.Model(data_sources=ds_31)
+
+        **b) `.cas.h5`, `.dat.h5` result files**
+
+        .. jupyter-execute::
+
+            # Create the model with the DataSources object
+            model_33 = dpf.Model(data_sources=ds_32)
+
+    .. tab-item:: CFX
+
+        **a) `.res` result file**
+
+        .. jupyter-execute::
+
+            # Create the model with the result file path
+            model_41 = dpf.Model(data_sources=result_file_path_41)
+
+            # Create the model with the DataSources object
+            model_42 = dpf.Model(data_sources=ds_41)
+
+        **b) `.cas.cff`, `.dat.cff` result files**
+
+        .. jupyter-execute::
+
+            # Create the model with the DataSources object
+            model_43 = dpf.Model(data_sources=ds_42)
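+
+Once a |Model| is created, printing it is a quick way to check what DPF found in the result file
+(available results, mesh and time steps). This is a short optional check, shown here on the first
+MAPDL model as an example.
+
+.. jupyter-execute::
+
+    # Print the model to get an overview of the result file contents
+    print(model_11)
+
diff --git a/doc/source/user_guide/tutorials/import_data/index.rst b/doc/source/user_guide/tutorials/import_data/index.rst
index 112339d5a5..b9607fa4e2 100644
--- a/doc/source/user_guide/tutorials/import_data/index.rst
+++ b/doc/source/user_guide/tutorials/import_data/index.rst
@@ -15,19 +15,12 @@ From user input
     :padding: 2
     :margin: 2
 
-    .. 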
grid-item-card:: Import data from csv file - :link: ref_tutorials + .. grid-item-card:: Load custom data + :link: ref_tutorials_load_custom_data :link-type: ref :text-align: center - Learn how to import data in DPF from csv file - - .. grid-item-card:: Represent your data in DPF - :link: ref_tutorials - :link-type: ref - :text-align: center - - Learn how to represent your manual input data in a DPF data storage structure + Learn how to build DPF data storage structures from custom data. From result files ***************** @@ -37,29 +30,54 @@ From result files :padding: 2 :margin: 2 + .. grid-item-card:: Import a result file in DPF + :link: ref_tutorials_import_result_file + :link-type: ref + :text-align: center + + This tutorial shows how to import a result file in DPF. + + +++ + :bdg-mapdl:`MAPDL` :bdg-lsdyna:`LS-DYNA` :bdg-fluent:`FLUENT` :bdg-cfx:`CFX` + .. grid-item-card:: Extract and explore results metadata - :link: ref_tutorials + :link: ref_tutorials_extract_and_explore_results_metadata :link-type: ref :text-align: center - This tutorial + This tutorial shows how to extract and explore results metadata (analysis type, + physics type, unit system...) from a result file. - .. grid-item-card:: Extract and explore results - :link: ref_tutorials + +++ + :bdg-mapdl:`MAPDL` :bdg-lsdyna:`LS-DYNA` :bdg-fluent:`FLUENT` :bdg-cfx:`CFX` + + .. grid-item-card:: Extract and explore results data + :link: ref_tutorials_extract_and_explore_results_data :link-type: ref :text-align: center - This tutorial + This tutorial shows how to extract and explore results data from a result file. + + +++ + :bdg-mapdl:`MAPDL` :bdg-lsdyna:`LS-DYNA` :bdg-fluent:`FLUENT` :bdg-cfx:`CFX` - .. grid-item-card:: Narrow down data (scoping tutorial) - :link: ref_tutorials + .. grid-item-card:: Narrow down data + :link: reft_tutorials_narrow_down_data :link-type: ref :text-align: center - This tutorial + This tutorial explains how to scope (get a spatial and/or temporal subset of + the simulation data) your results. + +++ + :bdg-mapdl:`MAPDL` :bdg-lsdyna:`LS-DYNA` :bdg-fluent:`FLUENT` :bdg-cfx:`CFX` .. toctree:: :maxdepth: 2 :hidden: + load_custom_data.rst + import_result_file.rst + extract_and_explore_results_metadata.rst + extract_and_explore_results_data.rst + narrow_down_data.rst \ No newline at end of file diff --git a/doc/source/user_guide/tutorials/import_data/load_custom_data.rst b/doc/source/user_guide/tutorials/import_data/load_custom_data.rst new file mode 100644 index 0000000000..cb2737933e --- /dev/null +++ b/doc/source/user_guide/tutorials/import_data/load_custom_data.rst @@ -0,0 +1,697 @@ +.. _ref_tutorials_load_custom_data: + +======================= +Load custom data in DPF +======================= + +.. include:: ../../../links_and_refs.rst +.. |Field.append| replace:: :func:`append()` +.. |Field.data| replace:: :attr:`Field.data` +.. |fields_factory| replace:: :mod:`fields_factory` +.. |fields_container_factory| replace:: :mod:`fields_container_factory` +.. |location| replace:: :class:`location` +.. |nature| replace:: :class:`nature` +.. |dimensionality| replace:: :class:`dimensionality` +.. |Field.dimensionality| replace:: :func:`Field.dimensionality` +.. |Field.location| replace:: :func:`Field.location` +.. |Field.scoping| replace:: :func:`Field.scoping` +.. |field_from_array| replace:: :func:`field_from_array()` +.. |create_scalar_field| replace:: :func:`create_scalar_field()` +.. |create_vector_field| replace:: :func:`create_vector_field()` +.. 
|create_3d_vector_field| replace:: :func:`create_3d_vector_field()`
+.. |create_matrix_field| replace:: :func:`create_matrix_field()`
+.. |create_tensor_field| replace:: :func:`create_tensor_field()`
+.. |over_time_freq_fields_container| replace:: :func:`over_time_freq_fields_container()`
+
+This tutorial shows how to represent your custom data in DPF data storage structures.
+
+To import your custom data in DPF, you must create a DPF data structure to store it.
+DPF uses |Field| and |FieldsContainer| objects to handle data. The |Field| is a homogeneous array
+and a |FieldsContainer| is a labeled collection of |Field| objects. For more information on DPF data structures
+such as the |Field| and their use, check the :ref:`ref_tutorials_data_structures` tutorials section.
+
+:jupyter-download-script:`Download tutorial as Python script`
+:jupyter-download-notebook:`Download tutorial as Jupyter notebook`
+
+Define the data
+---------------
+
+In this tutorial, we create different Fields from data stored in Python lists.
+
+Create the Python lists with the data to be *set* to the Fields.
+
+.. jupyter-execute::
+
+    # Data for the scalar Fields (lists with 1 and 2 dimensions)
+    data_1 = [6.0, 5.0, 4.0, 3.0, 2.0, 1.0]
+    data_2 = [[12.0, 7.0, 8.0], [ 9.0, 31.0, 1.0]]
+
+    # Data for the vector Fields (lists with 1 and 2 dimensions)
+    data_3 = [4.0, 1.0, 8.0, 5.0, 7.0, 9.0]
+    data_4 = [6.0, 5.0, 4.0, 3.0, 2.0, 1.0, 9.0, 7.0, 8.0, 10.0]
+    data_5 = [[8.0, 4.0, 3.0], [31.0, 5.0, 7.0]]
+
+    # Data for the matrix Fields
+    data_6 = [3.0, 2.0, 1.0, 7.0]
+    data_7 = [15.0, 3.0, 9.0, 31.0, 1.0, 42.0, 5.0, 68.0, 13.0]
+    data_8 = [[12.0, 7.0, 8.0], [ 1.0, 4.0, 27.0], [98.0, 4.0, 6.0]]
+
+Create the Python lists with the data to be *appended* to the Fields.
+
+.. jupyter-execute::
+
+    # Data for the scalar Fields
+    data_9 = [24.0]
+
+    # Data for the vector Fields
+    data_10 = [47.0, 33.0, 5.0]
+
+    # Data for the matrix Fields
+    data_11 = [8.0, 2.0, 4.0, 64.0, 32.0, 47.0, 11.0, 23.0, 1.0]
+
+
+Create the Fields
+-----------------
+
+In this tutorial, we explain how to create the following Fields:
+
+- Scalar Field;
+- Vector Field;
+- Matrix Field.
+
+.. note::
+
+    A |Field| must always be given:
+
+    - A |location| and a |Scoping|.
+
+      Here, we create Fields in the default *'Nodal'* |location|. Thus, each entity (here, the nodes) must
+      have a |Scoping| id, which can be defined in a random or in a sequential order:
+
+      - If you want to *set* a data array to the |Field|, you must first set the |Scoping| ids using the |Field.scoping| method.
+      - If you want to *append* an entity with a data array to the |Field|, you do not need to set the |Scoping| ids beforehand.
+
+    - A |nature| and a |dimensionality| (number of data components for each entity). They must respect the type and size of the
+      data to be stored in the |Field|.
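+
+For example, fields do not have to be nodal. The following minimal sketch assumes that the
+|create_scalar_field| function accepts a *'location'* argument; it creates an elemental scalar
+|Field| instead of the default nodal one.
+
+.. jupyter-execute::
+
+    # Import the ``ansys.dpf.core`` module
+    from ansys.dpf import core as dpf
+
+    # Create a scalar Field located on elements instead of nodes
+    elemental_field = dpf.fields_factory.create_scalar_field(num_entities=3, location=dpf.locations.elemental)
+
+    # Print the Field to check its location
+    print(elemental_field)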
+
+Import the PyDPF-Core library
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+First, import the PyDPF-Core library.
+
+.. jupyter-execute::
+
+    # Import the ``ansys.dpf.core`` module
+    from ansys.dpf import core as dpf
+
+Define the Fields sizing
+^^^^^^^^^^^^^^^^^^^^^^^^
+
+The second step consists in defining the Fields dimensions.
+
+.. tab-set::
+
+    .. tab-item:: Scalar fields
+
+        Here, we create one |Field| with 6 scalars. Thus, there are 6 entities with one |Scoping| id each.
+
+        .. jupyter-execute::
+
+            # Define the number of entities
+            num_entities_1 = 6
+
+        You must ensure that this |Field| has a *'scalar'* |nature| and a *'1D'* |dimensionality|.
+
+    .. tab-item:: Vector fields
+
+        Here, we create:
+
+        - One |Field| with 2 vectors (thus, 2 entities) of 3 components each (3D vector |Field|);
+        - One |Field| with 2 vectors (thus, 2 entities) of 5 components each (5D vector |Field|).
+
+        .. jupyter-execute::
+
+            # Define the number of entities
+            num_entities_2 = 2
+
+        You must ensure that these Fields have a *'vector'* |nature| and the corresponding |dimensionality|
+        (*'3D'* and *'5D'*).
+
+    .. tab-item:: Matrix fields
+
+        Here, we create:
+
+        - One Field with 1 matrix (thus, 1 entity) of 2 lines and 2 columns;
+        - Two Fields with 1 matrix (thus, 1 entity) of 3 lines and 3 columns (tensor).
+
+        .. jupyter-execute::
+
+            # Define the number of entities
+            num_entities_3 = 1
+
+        You must ensure that these Fields have a *'matrix'* |nature| and the corresponding |dimensionality|.
+
+Create the Fields objects
+^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can create the Fields using two approaches:
+
+- :ref:`Instantiating the Field object <ref_create_field_instance>`;
+- :ref:`Using the fields_factory module <ref_create_field_fields_factory>`.
+
+.. _ref_create_field_instance:
+
+Create a |Field| by an instance of this object
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. tab-set::
+
+    .. tab-item:: Scalar fields
+
+        .. jupyter-execute::
+
+            # Define the number of entities
+            num_entities_1 = 6
+
+        You must ensure that this |Field| has a *'scalar'* |nature| and a *'1D'* |dimensionality|.
+
+        For this approach, the default |nature| of the |Field| object is *'vector'*. You can modify it directly with the
+        *'nature'* argument or with the |Field.dimensionality| method.
+
+        Create the scalar |Field| and use the *'nature'* argument.
+
+        .. jupyter-execute::
+
+            # Instantiate the Field
+            field_11 = dpf.Field(nentities=num_entities_1, nature=dpf.common.natures.scalar)
+
+            # Set the scoping ids
+            field_11.scoping.ids = range(num_entities_1)
+
+            # Print the Field
+            print("Scalar Field: ", '\n',field_11, '\n')
+
+        Create the scalar |Field| and use the |Field.dimensionality| method.
+
+        .. jupyter-execute::
+
+            # Instantiate the Field
+            field_12 = dpf.Field(nentities=num_entities_1)
+
+            # Use the Field.dimensionality method
+            field_12.dimensionality = dpf.Dimensionality([1])
+
+            # Set the scoping ids
+            field_12.scoping.ids = range(num_entities_1)
+
+            # Print the Field
+            print("Scalar Field : ", '\n',field_12, '\n')
+
+    .. tab-item:: Vector fields
+
+        Here, we create:
+
+        - One |Field| with 2 vectors (thus, 2 entities) of 3 components each (3D vector |Field|);
+        - One |Field| with 2 vectors (thus, 2 entities) of 5 components each (5D vector |Field|).
+
+        .. jupyter-execute::
+
+            # Define the number of entities
+            num_entities_2 = 2
+
+        You must ensure that these Fields have a *'vector'* |nature| and the corresponding |dimensionality| (*'3D'* and *'5D'*).
+
+        For this approach, the default |nature| is *'vector'* and the default |dimensionality| is *'3D'*. So, for the second vector
+        |Field|, you must set a *'5D'* |dimensionality| using the |Field.dimensionality| method.
+
+        Create the *'3D'* vector Field.
+
+        .. jupyter-execute::
+
+            # Instantiate the Field
+            field_21 = dpf.Field(nentities=num_entities_2)
+
+            # Set the scoping ids
+            field_21.scoping.ids = range(num_entities_2)
+
+            # Print the Field
+            print("3D vector Field : ", '\n',field_21, '\n')
+
+        Create the *'5D'* vector Field.
+
+        .. jupyter-execute::
+
+            # Instantiate the Field
+            field_31 = dpf.Field(nentities=num_entities_2)
+
+            # Use the Field.dimensionality method
+            field_31.dimensionality = dpf.Dimensionality([5])
+
+            # Set the scoping ids
+            field_31.scoping.ids = range(num_entities_2)
+
+            # Print the Field
+            print("5D vector Field: ", '\n',field_31, '\n')
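+
+        As a quick optional check, you can verify the layout of the new |Field| before adding data:
+        the number of entities comes from its scoping and the number of components from its
+        dimensionality.
+
+        .. jupyter-execute::
+
+            # Check that the 5D vector Field has 2 entities with 5 components each
+            print("Number of entities: ", field_31.scoping.size)
+            print("Components count: ", field_31.component_count)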
+
+.. _ref_create_field_fields_factory:
+
+Create a |Field| using the |fields_factory| module
+~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+
+.. tab-set::
+
+    .. tab-item:: Scalar fields
+
+        You can use two functions from the |fields_factory| module to create a scalar |Field|:
+
+        - The |create_scalar_field| function;
+        - The |field_from_array| function.
+
+        **Create the Field using the create_scalar_field function**
+
+        For this approach, the default |nature| of the |Field| object is *'scalar'* and the default |dimensionality| is *'1D'*.
+        Thus, you just have to use the |create_scalar_field| function to create a scalar |Field|.
+
+        .. jupyter-execute::
+
+            # Create the scalar Field
+            field_13 = dpf.fields_factory.create_scalar_field(num_entities=num_entities_1)
+
+            # Set the scoping ids
+            field_13.scoping.ids = range(num_entities_1)
+
+            # Print the Field
+            print("Scalar Field: ", '\n',field_13, '\n')
+
+        **Create the Field using the field_from_array function**
+
+        Unlike the other approaches, where you set or append the data after creating the |Field|, here the data is
+        used as an input of the |field_from_array| function.
+
+        This function takes a NumPy array or Python list of either:
+
+        - 1 dimension (one array). In this case, you directly get a scalar |Field|;
+        - 2 dimensions (one array containing multiple arrays with 3 components each). In this case, you get a 3D vector |Field|.
+          Thus, you have to change the |Field| |dimensionality| using the |Field.dimensionality| method.
+
+        Create the scalar Field with a 1-dimensional list.
+
+        .. jupyter-execute::
+
+            # Use the field_from_array function
+            field_14 = dpf.fields_factory.field_from_array(arr=data_1)
+
+            # Set the scoping ids
+            field_14.scoping.ids = range(num_entities_1)
+
+            # Print the Field
+            print("Scalar Field: ", '\n',field_14, '\n')
+
+        Create the scalar Field with a 2-dimensional list.
+
+        .. jupyter-execute::
+
+            # Use the field_from_array function
+            field_15 = dpf.fields_factory.field_from_array(arr=data_2)
+
+            # Use the Field.dimensionality method
+            field_15.dimensionality = dpf.Dimensionality([1])
+
+            # Set the scoping ids
+            field_15.scoping.ids = range(num_entities_1)
+
+            # Print the Field
+            print("Scalar Field (b): ", '\n',field_15, '\n')
+
+
+    .. tab-item:: Vector fields
+
+        You can use three functions from the |fields_factory| module to create a vector |Field|:
+
+        - The |create_vector_field| function;
+        - The |create_3d_vector_field| function (specifically to create a 3D vector |Field|,
+          that is, a vector |Field| with 3 components for each entity);
+        - The |field_from_array| function.
+
+        **Create the Field using the create_vector_field() function**
+
+        For this approach, the default |nature| is *'vector'*. To define the |dimensionality|, you must use the *'num_comp'* argument.
+
+        Create the *'3D'* vector Field.
+
+        .. jupyter-execute::
+
+            # Use the create_vector_field function
+            field_22 = dpf.fields_factory.create_vector_field(num_entities=num_entities_2, num_comp=3)
+
+            # Set the scoping ids
+            field_22.scoping.ids = range(num_entities_2)
+
+            # Print the Field
+            print("3D vector Field : ", '\n',field_22, '\n')
+
+        Create the *'5D'* vector Field.
+
+        .. jupyter-execute::
+
+            # Use the create_vector_field function
+            field_32 = dpf.fields_factory.create_vector_field(num_entities=num_entities_2, num_comp=5)
+
+            # Set the scoping ids
+            field_32.scoping.ids = range(num_entities_2)
+
+            # Print the Field
+            print("5D vector Field : ", '\n',field_32, '\n')
+
+        **Create a 3D vector Field using the create_3d_vector_field() function**
+
+        For this approach, the default |nature| is *'vector'* and the |dimensionality| is *'3D'*. Thus, you just
+        have to use the |create_3d_vector_field| function to create a 3D vector |Field|.
+
+        .. jupyter-execute::
+
+            # Create the 3D vector Field
+            field_25 = dpf.fields_factory.create_3d_vector_field(num_entities=num_entities_2)
+            # Set the scoping ids
+            field_25.scoping.ids = range(num_entities_2)
+
+            # Print the Field
+            print("Vector Field (3D): ", '\n',field_25, '\n')
+
+        **Create the Field using the field_from_array() function**
+
+        Unlike the other approaches, where you set or append the data after creating the |Field|, here the data is
+        used as an input of the |field_from_array| function.
+
+        This function takes a NumPy array or Python list of either:
+
+        - 1 dimension (one array). In this case, you have to change the |Field| |dimensionality| using the
+          |Field.dimensionality| method.
+        - 2 dimensions (one array containing multiple arrays with 3 components). In this case, you get a 3D vector |Field|.
+
+        .. note::
+
+            The |Field| must always have a homogeneous shape. The shape is a tuple with the number of elementary data and the
+            number of components.
+
+            So, for the *'5D'* vector |Field|, we would want a shape of (10,5). However, the 2-dimensional data vector we
+            defined ("data_5") has an elementary data count of 6 (2*3). Thus, we cannot define the *'5D'* vector |Field| because it
+            would have a (6,5) shape.
+
+        Create the *'3D'* vector Field with a 1-dimensional list.
+
+        .. jupyter-execute::
+
+            # Use the field_from_array function
+            field_23 = dpf.fields_factory.field_from_array(arr=data_3)
+
+            # Use the Field.dimensionality method
+            field_23.dimensionality = dpf.Dimensionality([3])
+
+            # Set the scoping ids
+            field_23.scoping.ids = range(num_entities_2)
+
+            # Print the Field
+            print("3D vector Field: ", '\n',field_23, '\n')
+
+        Create the *'3D'* vector Field with a 2-dimensional list.
+
+        .. jupyter-execute::
+
+            # Use the field_from_array function
+            field_24 = dpf.fields_factory.field_from_array(arr=data_5)
+
+            # Set the scoping ids
+            field_24.scoping.ids = range(num_entities_2)
+
+            # Print the Field
+            print("3D vector Field: ", '\n',field_24, '\n')
+
+    .. tab-item:: Matrix fields
+
+        You can create a matrix |Field| using the |create_matrix_field| function from the |fields_factory| module.
+
+        The default |nature| here is *'matrix'*. Thus, you only have to define the matrix |dimensionality| using the
+        *'num_lines'* and *'num_col'* arguments.
+
+        Create the (2,2) matrix Field.
+
+        .. jupyter-execute::
+
+            # Use the create_matrix_field function
+            field_41 = dpf.fields_factory.create_matrix_field(num_entities=num_entities_3, num_lines=2, num_col=2)
+
+            # Set the scoping ids
+            field_41.scoping.ids = range(num_entities_3)
+
+            # Print the Field
+            print("Matrix Field (2,2) : ", '\n',field_41, '\n')
+
+        Create the (3,3) matrix Fields.
+
+        .. 
jupyter-execute:: + + # Use the create_matrix_field function + field_51 = dpf.fields_factory.create_matrix_field(num_entities=num_entities_3, num_lines=3, num_col=3) + field_52 = dpf.fields_factory.create_matrix_field(num_entities=num_entities_3, num_lines=3, num_col=3) + + # Set the scoping ids + field_51.scoping.ids = range(num_entities_3) + field_52.scoping.ids = range(num_entities_3) + + # Print the Field + print("Matrix Field 1 (3,3) : ", '\n',field_51, '\n') + print("Matrix Field 2 (3,3) : ", '\n',field_52, '\n') + +Set data to the Fields +---------------------- + +To set a data array to a |Field| use the |Field.data| method. The |Field| |Scoping| defines how the data is ordered. +For example: the first id in the scoping identifies to which entity the first data entity belongs to. + +The data can be in a 1 dimension (one array) or 2 dimensions (one array containing multiple arrays) +Numpy array or Python list. When attributed to a |Field|, these data arrays are reshaped to respect +the |Field| definition. + +.. tab-set:: + + .. tab-item:: Scalar fields + + Set the data from a 1 dimensional array to the scalar Field. + + .. jupyter-execute:: + + # Set the data + field_11.data = data_1 + + # Print the Field + print("Scalar Field : ", '\n',field_11, '\n') + + # Print the Fields data + print("Data scalar Field : ", '\n',field_11.data, '\n') + + Set the data from a 2 dimensional array to the scalar Field. + + .. jupyter-execute:: + + # Set the data + field_12.data = data_2 + + # Print the Field + print("Scalar Field : ", '\n',field_12, '\n') + + # Print the Fields data + print("Data scalar Field : ", '\n',field_12.data, '\n') + + .. tab-item:: Vector fields + + Set the data from a 1 dimensional array to the *'3D'* vector Field. + + .. jupyter-execute:: + + # Set the data + field_21.data = data_3 + + # Print the Field + print("Vector Field : ", '\n',field_21, '\n') + + # Print the Fields data + print("Data vector Field: ", '\n',field_21.data, '\n') + + Set the data from a 1 dimensional array to the *'5D'* vector Field. + + .. jupyter-execute:: + + # Set the data + field_31.data = data_4 + + # Print the Field + print("Vector Field: ", '\n',field_31, '\n') + + # Print the Fields data + print("Data vector Field : ", '\n',field_31.data, '\n') + + Set the data from a 2 dimensional array to the *'3D'* vector Field. + + .. jupyter-execute:: + + # Set the data + field_22.data = data_5 + + # Print the Field + print("Vector Field: ", '\n',field_22, '\n') + + # Print the Fields data + print("Data vector Field: ", '\n',field_22.data, '\n') + + .. tab-item:: Matrix fields + + Set the data from a 1 dimensional array to the (2,2) matrix Field. + + .. jupyter-execute:: + + # Set the data + field_41.data = data_6 + + # Print the Field + print("Matrix Field: ", '\n',field_41, '\n') + + # Print the Fields data + print("Data matrix Field: ", '\n',field_41.data, '\n') + + Set the data from a 1 dimensional array to the (3,3) matrix Field. + + .. jupyter-execute:: + + # Set the data + field_51.data = data_7 + + # Print the Field + print("Matrix Field: ", '\n',field_51, '\n') + + # Print the Fields data + print("Data matrix Field: ", '\n',field_51.data, '\n') + + Set the data from a 2 dimensional array to the (3,3) matrix Field. + + .. 
jupyter-execute::
+
+            # Set the data
+            field_52.data = data_8
+
+            # Print the Field
+            print("Matrix Field: ", '\n',field_52, '\n')
+
+            # Print the Fields data
+            print("Data matrix Field: ", '\n',field_52.data, '\n')
+
+Append data to the Fields
+-------------------------
+
+You can append a data array to a |Field|; this means adding a new entity with the new data in the |Field|. You have to
+give the |Scoping| id that this entity will have.
+
+.. tab-set::
+
+    .. tab-item:: Scalar fields
+
+        Append data to a scalar |Field|.
+
+        .. jupyter-execute::
+
+            # Append the data
+            field_11.append(scopingid=6, data=data_9)
+
+            # Print the Field
+            print("Scalar Field : ", '\n',field_11, '\n')
+
+            # Print the Fields data
+            print("Data scalar Field: ", '\n',field_11.data, '\n')
+
+    .. tab-item:: Vector fields
+
+        Append data to a vector |Field|.
+
+        .. jupyter-execute::
+
+            # Append the data
+            field_21.append(scopingid=2, data=data_10)
+
+            # Print the Field
+            print("Vector Field : ", '\n',field_21, '\n')
+
+            # Print the Fields data
+            print("Data vector Field: ", '\n',field_21.data, '\n')
+
+
+    .. tab-item:: Matrix fields
+
+        Append data to a matrix |Field|.
+
+        .. jupyter-execute::
+
+            # Append the data
+            field_51.append(scopingid=1, data=data_11)
+
+            # Print the Field
+            print("Matrix Field : ", '\n',field_51, '\n')
+
+            # Print the Fields data
+            print("Data Matrix Field: ", '\n',field_51.data, '\n')
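+
+When building a |Field| entity by entity, for instance from data computed outside DPF, you can call
+|Field.append| in a loop. This is a minimal sketch with made-up values, assuming one scalar per entity.
+
+.. jupyter-execute::
+
+    # Build a scalar Field entity by entity
+    field_loop = dpf.fields_factory.create_scalar_field(num_entities=3)
+    for entity_id, value in enumerate([1.0, 2.0, 3.0]):
+        field_loop.append(data=[value], scopingid=entity_id)
+
+    # Print the Field
+    print(field_loop)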
+
+Create a |FieldsContainer|
+--------------------------
+
+A |FieldsContainer| is a collection of |Field| objects ordered by labels. Each |Field| of the |FieldsContainer| has
+an id for each label. These ids allow splitting the fields on any criteria.
+
+The most common |FieldsContainer| has the label *'time'* with ids corresponding to time sets. The label *'complex'*,
+which is used in a harmonic analysis for example, allows real parts (id=0) to be separated from imaginary parts (id=1).
+
+For more information on DPF data structures, see the :ref:`ref_tutorials_data_structures` tutorials section.
+
+You can create a |FieldsContainer| by:
+
+- :ref:`Instantiating the FieldsContainer object <ref_fields_container_instance>`;
+- :ref:`Using the fields_container_factory module <ref_fields_container_factory_module>`.
+
+.. _ref_fields_container_instance:
+
+Create a |FieldsContainer| by an instance of this object
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+After defining a |FieldsContainer| by an instance of this object, you need to set the labels. Here, we define
+Fields over time step labels. So, when you add a |Field| to the |FieldsContainer|, you must specify the time step id
+it belongs to.
+
+.. jupyter-execute::
+
+    # Create the FieldsContainer object
+    fc_1 = dpf.FieldsContainer()
+
+    # Define the labels
+    fc_1.add_label(label="time")
+
+    # Add the Fields
+    fc_1.add_field(label_space={"time": 0}, field=field_21)
+    fc_1.add_field(label_space={"time": 1}, field=field_31)
+
+    # Print the FieldsContainer
+    print(fc_1)
+
+.. _ref_fields_container_factory_module:
+
+Create a |FieldsContainer| with the |fields_container_factory| module
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The |fields_container_factory| module contains functions that create a |FieldsContainer| with predefined
+labels. Here, we use the |over_time_freq_fields_container| function, which creates a |FieldsContainer| with a *'time'*
+label.
+
+.. jupyter-execute::
+
+    # Create the FieldsContainer
+    fc_2 = dpf.fields_container_factory.over_time_freq_fields_container(fields=[field_21, field_31])
+
+    # Print the FieldsContainer
+    print(fc_2)
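+
+Once the |FieldsContainer| is built, you can retrieve a |Field| back from it through its label space.
+This is a short optional sketch.
+
+.. jupyter-execute::
+
+    # Get the Field associated with the time set id 1
+    print(fc_2.get_field({"time": 1}))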
\ No newline at end of file
diff --git a/doc/source/user_guide/tutorials/import_data/narrow_down_data.rst b/doc/source/user_guide/tutorials/import_data/narrow_down_data.rst
new file mode 100644
index 0000000000..4e2c668037
--- /dev/null
+++ b/doc/source/user_guide/tutorials/import_data/narrow_down_data.rst
@@ -0,0 +1,457 @@
+.. _reft_tutorials_narrow_down_data:
+
+================
+Narrow down data
+================
+
+:bdg-mapdl:`MAPDL` :bdg-lsdyna:`LS-DYNA` :bdg-fluent:`FLUENT` :bdg-cfx:`CFX`
+
+.. include:: ../../../links_and_refs.rst
+.. |location| replace:: :class:`location`
+.. |time_freq_scoping_factory| replace:: :mod:`time_freq_scoping_factory`
+.. |mesh_scoping_factory| replace:: :mod:`mesh_scoping_factory`
+.. |displacement| replace:: :class:`result.displacement `
+.. |Model.results| replace:: :func:`Model.results `
+.. |result op| replace:: :mod:`result`
+.. |rescope| replace:: :class:`rescope `
+.. |from_mesh| replace:: :class:`from_mesh `
+.. |extract_scoping| replace:: :class:`extract_scoping `
+.. |scoping_by_sets| replace:: :func:`scoping_by_sets() `
+.. |nodal_scoping| replace:: :func:`nodal_scoping() `
+.. |ScopingsContainer| replace:: :class:`ScopingsContainer `
+.. |MeshedRegion.elements| replace:: :func:`MeshedRegion.elements`
+.. |MeshedRegion.nodes| replace:: :func:`MeshedRegion.nodes`
+.. |Elements.scoping| replace:: :func:`Elements.scoping`
+.. |Nodes.scoping| replace:: :func:`Nodes.scoping`
+.. |Field.scoping| replace:: :func:`Field.scoping`
+.. |Model.metadata| replace:: :func:`Model.metadata`
+.. |Metadata| replace:: :class:`Metadata `
+.. |Metadata.time_freq_support| replace:: :func:`Metadata.time_freq_support`
+.. |FieldsContainer.time_freq_support| replace:: :func:`FieldsContainer.time_freq_support`
+.. |Field.time_freq_support| replace:: :func:`Field.time_freq_support`
+.. |TimeFreqSupport.time_frequencies| replace:: :func:`TimeFreqSupport.time_frequencies`
+
+This tutorial explains how to scope your results over time and mesh domains.
+
+:jupyter-download-script:`Download tutorial as Python script`
+:jupyter-download-notebook:`Download tutorial as Jupyter notebook`
+
+Understanding the scope
+-----------------------
+
+To begin the workflow setup, you need to establish the ``scoping``, that is,
+a spatial and/or temporal subset of the simulation data.
+
+The data in DPF is represented by a |Field|. Thus, narrowing down your results means scoping your |Field|.
+To do so in DPF, you use the |Scoping| object. You can retrieve all the time steps available for
+a result, but you can also filter them.
+
+.. note::
+
+    Scoping is important because when DPF-Core returns the |Field| object, what Python actually has
+    is a client-side representation of the |Field|, not the entirety of the |Field| itself. This means
+    that all the data of the field is stored within the DPF service. This is important
+    because when building your workflows, the most efficient way of interacting with result data
+    is to minimize the exchange of data between Python and DPF, either by using operators
+    or by accessing exclusively the data that is needed. For more information on the DPF data storage
+    structures, see :ref:`ref_tutorials_data_structures`.
+
+In conclusion, the essence of a scoping is to specify a set of time or mesh entities by defining a range of ids:
+
+.. image:: ../../../images/drawings/scoping-eg.png
+   :align: center
+
+Create a |Scoping| object from scratch
+--------------------------------------
+
+First, import the necessary PyDPF-Core modules.
+
+.. jupyter-execute::
+
+    # Import the ``ansys.dpf.core`` module
+    from ansys.dpf import core as dpf
+
+Then, use the available APIs to create a |Scoping| object. It can be created by:
+
+- :ref:`Instantiating the Scoping class <ref_create_scoping_instance_object>`;
+- :ref:`Using the scoping factory <ref_create_scoping_scoping_factory>`.
+
+
+.. _ref_create_scoping_instance_object:
+
+Instantiate a |Scoping|
+^^^^^^^^^^^^^^^^^^^^^^^
+
+Create a time and a mesh |Scoping| by instantiating the |Scoping| object. Use the *'ids'* and *'location'* arguments
+and give the entity ids and the |location| of interest.
+
+.. tab-set::
+
+    .. tab-item:: Time scoping
+
+        A time location in DPF is a |TimeFreqSupport| object. Thus, we choose a *'time_freq'* |location| and target
+        a set of time steps by their ids.
+
+        .. jupyter-execute::
+
+
+            # Define a time list that targets the times ids 14, 15, 16, 17
+            time_list_1 = [14, 15, 16, 17]
+
+            # Create the time Scoping object
+            time_scoping_1 = dpf.Scoping(ids=time_list_1, location=dpf.locations.time_freq)
+
+
+    .. tab-item:: Mesh scoping
+
+        Here, we choose a nodal |location| and target a set of nodes by their ids.
+
+        .. jupyter-execute::
+
+            # Define a nodes list that targets the nodes with the ids 103, 204, 334, 1802
+            nodes_ids_1 = [103, 204, 334, 1802]
+
+            # Create the mesh Scoping object
+            mesh_scoping_1 = dpf.Scoping(ids=nodes_ids_1, location=dpf.locations.nodal)
+
+.. _ref_create_scoping_scoping_factory:
+
+Use the scoping factory module
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Create a |Scoping| object by using the |time_freq_scoping_factory| module for a temporal |Scoping|
+and the |mesh_scoping_factory| module for a spatial |Scoping|.
+
+.. tab-set::
+
+    .. tab-item:: Time scoping
+
+        Here, we use the |scoping_by_sets| function so that we can have different time steps in the |Scoping|. This function
+        gives a |Scoping| on a *'time_freq'* |location|.
+
+        .. jupyter-execute::
+
+            # Define a time list that targets the times ids 14, 15, 16, 17
+            time_list_2 = [14, 15, 16, 17]
+
+            # Create the time Scoping object
+            time_scoping_2 = dpf.time_freq_scoping_factory.scoping_by_sets(cumulative_sets=time_list_2)
+
+
+    .. tab-item:: Mesh scoping
+
+        Here, we use the |nodal_scoping| function so that we have a mesh |Scoping| in a nodal |location|.
+
+        .. jupyter-execute::
+
+            # Define a nodes list that targets the nodes with the ids 103, 204, 334, 1802
+            nodes_ids_2 = [103, 204, 334, 1802]
+
+            # Create the mesh Scoping object
+            mesh_scoping_2 = dpf.mesh_scoping_factory.nodal_scoping(node_ids=nodes_ids_2)
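+
+A |Scoping| can be inspected at any point. The following optional sketch prints the location and
+the ids of one of the scopings defined above.
+
+.. jupyter-execute::
+
+    # Check the location and the ids of the mesh Scoping
+    print("Location: ", mesh_scoping_1.location)
+    print("Ids: ", mesh_scoping_1.ids)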
+
+Extract a |Scoping|
+-------------------
+
+You can extract a |Scoping| from some DPF objects. They are:
+
+.. tab-set::
+
+    .. tab-item:: Time scoping
+
+        - A |Model|;
+        - A |FieldsContainer|;
+        - A |Field|.
+
+
+    .. tab-item:: Mesh scoping
+
+        - A |MeshedRegion|;
+        - A |FieldsContainer|;
+        - A |Field|.
+
+Define the objects
+^^^^^^^^^^^^^^^^^^
+
+First, import a result file and create a |Model|. For this tutorial, you can use one available in the |Examples| module.
+For more information about how to import your own result file in DPF, see the :ref:`ref_tutorials_import_result_file`
+tutorial.
+
+.. jupyter-execute::
+
+    # Import the examples module
+    from ansys.dpf.core import examples
+    # Import the operators module
+    from ansys.dpf.core import operators as ops
+
+    # Define the result file path
+    result_file_path_1 = examples.download_transient_result()
+    # Create the DataSources object
+    ds_1 = dpf.DataSources(result_path=result_file_path_1)
+    # Create the model
+    model_1 = dpf.Model(data_sources=ds_1)
+
+From this result file, we extract:
+
+- The mesh (in DPF, a mesh is the |MeshedRegion| object);
+- The displacement results. The displacement |Result| object gives a |FieldsContainer| when evaluated. Additionally,
+  we can get a |Field| from this |FieldsContainer|.
+
+.. jupyter-execute::
+
+    # Get the MeshedRegion
+    meshed_region_1 = model_1.metadata.meshed_region
+
+    # Get a FieldsContainer with the displacement results
+    disp_fc = model_1.results.displacement.on_all_time_freqs.eval()
+
+    # Get a Field from the FieldsContainer
+    disp_field = disp_fc[0]
+
+Extract the time |Scoping|
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+Extracting the time |Scoping| means extracting the scoping of the time frequencies from the |TimeFreqSupport|
+of the DPF object.
+
+.. tab-set::
+
+    .. tab-item:: From the Model
+
+        You can extract the |TimeFreqSupport| available for the results by accessing the |Model| |Metadata|.
+        Thus, you must use the |Model.metadata| method. From the |Metadata|, you can get the |TimeFreqSupport|
+        by using the |Metadata.time_freq_support| method.
+
+        .. jupyter-execute::
+
+            # Extract the TimeFreq support
+            tfs_1 = model_1.metadata.time_freq_support
+
+        To extract the time frequencies, you use the |TimeFreqSupport.time_frequencies| method. The time
+        frequencies are given in a |Field|. Thus, to get the time |Scoping|, you need to use the |Field.scoping| method.
+        For this approach, the time frequencies are given in a *'TimeFreq_sets'* location.
+
+        .. jupyter-execute::
+
+            # Extract the time frequencies
+            t_freqs_1 = tfs_1.time_frequencies
+
+            # Extract the time scoping
+            time_scop_1 = t_freqs_1.scoping
+
+            # Print the time scoping
+            print(time_scop_1)
+
+
+    .. tab-item:: From the FieldsContainer
+
+        You can extract the |TimeFreqSupport| of each |Field| in the |FieldsContainer| by using the
+        |FieldsContainer.time_freq_support| method.
+
+        .. jupyter-execute::
+
+            # Extract the TimeFreq support
+            tfs_2 = disp_fc.time_freq_support
+
+        To extract the time frequencies, you use the |TimeFreqSupport.time_frequencies| method. The time
+        frequencies are given in a |Field|. Thus, to get the time |Scoping|, you need to use the |Field.scoping| method.
+
+        .. jupyter-execute::
+
+            # Extract the time frequencies
+            t_freqs_2 = tfs_2.time_frequencies
+
+            # Extract the time scoping
+            time_scop_2 = t_freqs_2.scoping
+
+            # Print the time scoping
+            print(time_scop_2)
+
+
+    .. tab-item:: From the Field
+
+        You can extract the |TimeFreqSupport| of a |Field| by using the |Field.time_freq_support| method.
+
+        .. jupyter-execute::
+
+            # Extract the TimeFreq support
+            tfs_3 = disp_field.time_freq_support
+
+        To extract the time frequencies, you use the |TimeFreqSupport.time_frequencies| method. The time
+        frequencies are given in a |Field|. Thus, to get the time |Scoping|, you need to use the |Field.scoping| method.
+
+        .. jupyter-execute::
+
+            # Extract the time frequencies
+            t_freqs_3 = tfs_3.time_frequencies
+
+            # Extract the time scoping
+            time_scop_3 = t_freqs_3.scoping
+
+            # Print the time scoping
+            print(time_scop_3)
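+
+The ids in these scopings are time set ids, not time values. To see the actual time values associated
+with the sets, you can print the data of the time frequencies |Field|. This is a short optional check.
+
+.. jupyter-execute::
+
+    # Print the time values associated with the time set ids
+    print(t_freqs_1.data)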
+
+Extract the mesh |Scoping|
+^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+.. tab-set::
+
+    .. tab-item:: From the MeshedRegion
+
+        You can extract the mesh |Scoping| from a |MeshedRegion| using:
+
+        - The |from_mesh| operator;
+        - The |Elements| object;
+        - The |Nodes| object.
+
+        **Use the from_mesh operator**
+
+        Extract the mesh |Scoping| from the |MeshedRegion| using the |from_mesh| operator. It gets the
+        |Scoping| for the entire mesh with a *'nodal'* location. You can also get an *'elemental'* location
+        by using the *'requested_location'* argument.
+
+        .. jupyter-execute::
+
+            # Extract the mesh scoping
+            mesh_scoping_3 = ops.scoping.from_mesh(mesh=meshed_region_1).eval()
+
+            # Print the mesh Scoping
+            print("Scoping from mesh", "\n", mesh_scoping_3, "\n")
+
+        **Use the Elements object**
+
+        You can obtain the |Elements| object from a given |MeshedRegion| by using the |MeshedRegion.elements|
+        method. You can extract the mesh |Scoping| from the |Elements| object by using the |Elements.scoping| method.
+        It gets the |Scoping| for the entire mesh with an *'elemental'* location.
+
+        .. jupyter-execute::
+
+            # Extract the mesh scoping
+            mesh_scoping_4 = meshed_region_1.elements.scoping
+
+            # Print the mesh Scoping
+            print("Scoping from mesh", "\n", mesh_scoping_4, "\n")
+
+        **Use the Nodes object**
+
+        You can obtain the |Nodes| object from a given |MeshedRegion| by using the |MeshedRegion.nodes|
+        method. You can extract the mesh |Scoping| from the |Nodes| object by using the |Nodes.scoping| method.
+        It gets the |Scoping| for the entire mesh with a *'nodal'* location.
+
+        .. jupyter-execute::
+
+            # Extract the mesh scoping
+            mesh_scoping_5 = meshed_region_1.nodes.scoping
+
+            # Print the mesh Scoping
+            print("Scoping from mesh", "\n", mesh_scoping_5, "\n")
+
+
+    .. tab-item:: From the FieldsContainer
+
+        Extract the mesh |Scoping| from the |FieldsContainer| using the |extract_scoping| operator. This operator gets the mesh
+        |Scoping| for each |Field| in the |FieldsContainer|. Thus, you must specify the output as a |ScopingsContainer|.
+
+        .. jupyter-execute::
+
+            # Define the extract_scoping operator
+            extract_scop_fc_op = ops.utility.extract_scoping(field_or_fields_container=disp_fc)
+
+            # Get the mesh Scopings from the operator's output
+            mesh_scoping_6 = extract_scop_fc_op.outputs.mesh_scoping_as_scopings_container()
+
+            # Print the mesh Scopings
+            print("Scoping from FieldsContainer", "\n", mesh_scoping_6, "\n")
+
+    .. tab-item:: From the Field
+
+        You can extract the mesh |Scoping| from a |Field| using:
+
+        - The |extract_scoping| operator;
+        - The |Field.scoping| method.
+
+        **Use the extract_scoping operator**
+
+        This operator gets the mesh |Scoping| from the result |Field|. This means it gets the |Scoping|
+        where the result is defined.
+
+        .. jupyter-execute::
+
+            # Extract the mesh scoping
+            mesh_scoping_7 = ops.utility.extract_scoping(field_or_fields_container=disp_field).eval()
+
+            # Print the mesh Scoping
+            print("Scoping from Field ", "\n", mesh_scoping_7, "\n")
+
+        **Use the Field.scoping method**
+
+        This method gets the mesh |Scoping| from the result |Field|. This means it gets the |Scoping|
+        where the result is defined.
+
+        .. jupyter-execute::
+
+            # Extract the mesh scoping
+            mesh_scoping_8 = disp_field.scoping
+
+            # Print the mesh Scoping
+            print("Scoping from Field", "\n", mesh_scoping_8, "\n")
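+
+An extracted |Scoping| can also serve as a starting point for a narrower one: since the ids are a plain
+list, you can filter them in Python and build a new |Scoping| from the subset. This is a minimal sketch.
+
+.. jupyter-execute::
+
+    # Keep only the first ten node ids of the full mesh scoping
+    subset_ids = mesh_scoping_5.ids[:10]
+
+    # Build a new Scoping from this subset
+    mesh_scoping_subset = dpf.Scoping(ids=subset_ids, location=dpf.locations.nodal)
+
+    # Print the subset Scoping
+    print(mesh_scoping_subset)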
+
+Use a |Scoping|
+---------------
+
+The |Scoping| object can be used:
+
+- :ref:`When extracting a result <ref_use_scoping_when_extracting>`;
+- :ref:`After extracting a result <ref_use_scoping_after_extracting>`.
+
+.. _ref_use_scoping_when_extracting:
+
+Extract and scope the results
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+You can extract and scope a result using the |Model.results| method or the |result op| operator inputs.
+Both approaches handle |Result| objects. Thus, to scope the results when extracting them, you use
+the *'time_scoping'* and *'mesh_scoping'* arguments and give the Scopings of interest.
+
+Here, we extract and scope the displacement results.
+
+.. jupyter-execute::
+
+    # Extract and scope the result using the Model.results method
+    disp_model = model_1.results.displacement(time_scoping=time_scoping_1, mesh_scoping=mesh_scoping_1).eval()
+
+    # Extract and scope the results using the result.displacement operator
+    disp_op = ops.result.displacement(data_sources=ds_1, time_scoping=time_scoping_1, mesh_scoping=mesh_scoping_1).eval()
+
+    # Print the displacement results
+    print("Displacement from Model.results ", "\n", disp_model, "\n")
+    print("Displacement from result.displacement operator", "\n", disp_op, "\n")
+
+.. _ref_use_scoping_after_extracting:
+
+Extract and rescope the results
+^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
+
+The mesh |Scoping| can be changed after the result extraction or manipulation by using the
+|rescope| operator. It takes a |Field| or a |FieldsContainer| that contains the results data
+and rescopes it.
+
+Here, we rescope the displacement results.
+
+.. jupyter-execute::
+
+    # Extract the results for the entire mesh
+    disp_all_mesh = model_1.results.displacement.eval()
+
+    # Rescope the displacement results to get the data only for a specific set of nodes
+    disp_rescope = ops.scoping.rescope(fields=disp_all_mesh, mesh_scoping=mesh_scoping_1).eval()
+
+    # Print the displacement results for the entire mesh
+    print("Displacement results for the entire mesh", "\n", disp_all_mesh, "\n")
+
+    # Print the displacement results for the specific set of nodes
+    print("Displacement results rescoped ", "\n", disp_rescope, "\n")
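+
+As a final check, you can compare the number of entities in the rescoped results with the number in the
+full-mesh results to see how much data the scoping removed. This is a short optional sketch.
+
+.. jupyter-execute::
+
+    # Compare the number of entities before and after rescoping
+    print("Full mesh entities: ", disp_all_mesh[0].scoping.size)
+    print("Rescoped entities: ", disp_rescope[0].scoping.size)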