Spark’s Test Case generation feature allows you to test services at scale, saving hours of manual effort. Whether it’s tens or thousands of scenarios you need to test, Spark can generate them using random data for your services' inputs, within the bounds you specify. With this release, you can limit your test to specified subservices as well, so you can focus on the things that matter.

 

How to generate random test cases in Spark

  • Navigate to Testing Center in Spark and click New Testbed.

  • From the modal window, click Open test case generator.

  • Complete the fields in the modal window:

    • Testbed name

    • Version

    • Subservices: In the dropdown, deselect any subservices that are not required for the test. Only subservices available in the selected version of the service will be listed here.

    • Number of test cases

    • Testbed description (optional)

  • Click Next and enter your desired boundary limits for your selected fields. Inputs for deselected subservices will not be displayed here.

  • Click Generate test cases to finish.

Additionally, the Run testbed and Generated test cases summary modals now specify the subservices which were included.
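To illustrate the general idea of bounded random test data (this is not Spark’s actual generator, and the field names and limits below are hypothetical), a minimal Python sketch might look like this:

import random

# Hypothetical input bounds, analogous to the limits you enter in the generator.
bounds = {
    "age": (18, 65),                        # integer range
    "sum_assured": (10_000.0, 500_000.0),   # numeric range
}

def generate_test_cases(bounds, count):
    """Draw one random value per input, within its bounds, for each test case."""
    cases = []
    for _ in range(count):
        case = {}
        for field, (low, high) in bounds.items():
            if isinstance(low, int) and isinstance(high, int):
                case[field] = random.randint(low, high)
            else:
                case[field] = round(random.uniform(low, high), 2)
        cases.append(case)
    return cases

print(generate_test_cases(bounds, 3))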

 

New Services section in Service Management

Services is a new section under Service Management that gives an overview of all the services in a tenant that the user has access to. Users can 1) search and filter the list based on a variety of criteria (including name, folder, tags, and dates), 2) save filters for later use, 3) view and manage saved filters, and 4) perform actions on services. Actions include: Go to Service, Download Service, Recompile Neuron, Go to API Tester, and Go to API Call History.

Please note: the tenant configuration option EnableTags must be set to TRUE for tag-related features to be displayed and accessible. If a Saved Filter includes tags in its parameter set and tags are later disabled for any reason, users of the filter will see those tags greyed out, and a tooltip will explain that tags are not enabled. Users may still remove any previously set tags from the filter.

 

New Billing and usage option in User menu

Users included in the User: Billing and payment user group may now access our billing portal (provided by Amberflo) via the Billing and usage link displayed in the User menu. This link will only be displayed (and access will only be granted) to users in that user group.

 

Xcall

 

Request Meta support in Xcall

Spark’s Xcall function allows users to consume APIs from within their Excel models. Values from the model may be passed as input parameters and values from the API response may then be consumed by other calculations within the model. Along with these request parameters, users may now also provide additional information about the call such as its purpose and the model the call is being made from. This information will be searchable via the API Call History log and is useful for auditing and tracking service usage. This additional info is referred to as “request meta”, since it is not directly involved in the generation of the calculation result or response but provides general information about the call.

This release introduces the following request meta parameters:

  • compiler_type

  • source_system

  • call_purpose

Users may specify these parameters as part of the input_template or input_values ranges – simply prefix these labels with a ‘hash’ or ‘pound’ symbol (#) so Spark knows to treat them as request meta parameters instead of standard parameters. Please note, this feature is only available when using Xcall via the Spark Assistant.

Example call

=C.SPARK_XCALL("SERVICE_URL", INPUT_TEMPLATE, OUTPUT_TEMPLATE, "#call_purpose", "Inline call", "#source_system", "Inline source")

If no value is provided for a particular request meta parameter, Spark will revert to the default values: compiler_type: Type3, source_system: Spark Assistant xCall, call_purpose: xCall.

 

Public API support in Xcall

Xcall now supports public APIs on the coherent.global domain. Simply specify the full endpoint URL as the first parameter of Xcall and proceed as normal.

Example for a standard/private API

C.Spark_Xcall("XCallV3/service8","input_location")

Example for a public API

C.Spark_Xcall("https://excel.dev.coherent.global/coherent/api/v3/public/xcall/execute/batch/{service_uri}")

 

New Recalculate SPARK_XCALL function button in Spark Assistant

When using Xcall, it’s always best to refresh the functions when re-opening or returning to an Excel file after a long period of time. Even if the inputs are the same, it’s possible there may be a new version of the service you’re interacting with. Spark Assistant will refresh all Xcall functions in an Excel file automatically every time you log in, but this release introduces a manual method via a new button in the Build menu. Clicking Recalculate SPARK_XCALL function will re-run all of the Xcall functions simultaneously.

 

Import/Export Tool - Define Destination Folder

Once a service has passed testing in UAT, the next step is usually to push it to the production environment. Re-uploading the model to production manually is not recommended, since there is a risk of uploading a different file or a previous version by mistake, thereby potentially introducing untested changes to production. Coherent’s Import/Export (“ImpEx”) Tool enables customers to promote files from test environments through to production safely and enforces good deployment practices.

Some customers have a setup wherein folder names (and sometimes even Service names) differ between Production and UAT Spark environments. The Coherent ImpEx Tool now supports such scenarios. Customers may now export a particular service from one folder in UAT and then upload it into a different folder within their production environment. Previously, Spark determined the destination of an upload via the name of the exported folder. Now, within the ImpEx manifest file, users may define an alternative destination for the output. If the specified alternative destination exists, the upload will be treated as an update to the existing service and the version will be incremented accordingly.

 

Validation API now includes default values

For new services, the Validation API now includes default values in its output, enabling consumers of the API to more quickly understand what these are without having to look elsewhere.

When the Validation API request_meta's validation_type is equal to default_values, the resulting output will include a default_value field containing the resolved value at the time of model upload. By contrast, when validation_type = dynamic, default values will not be included, since they don’t apply. Please note, this feature only applies to new services; outputs for existing services will not include default values.

Example output containing default_value

{
  "status": "Success",
  "response_data": {
    "outputs": {
      "01_letter": {
        "validation_allow": "List",
        "validation_type": "static",
        "dependent_inputs": [
          "02_number"
        ],
        "default_value": "a",
        "min": null,
        "max": null,
        "options": [
          "a",
          "b"
        ],
        "ignore_blank": true
      },
      ...
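As a minimal sketch of how an API consumer might pick up the new field, assuming the response above has already been parsed into a Python dictionary (the variable name and parsing step are assumptions for illustration):

# `response` stands for the parsed JSON body shown above.
response = {
    "status": "Success",
    "response_data": {
        "outputs": {
            "01_letter": {
                "validation_allow": "List",
                "validation_type": "static",
                "dependent_inputs": ["02_number"],
                "default_value": "a",
                "options": ["a", "b"],
                "ignore_blank": True,
            }
        }
    },
}

# Collect the default value for each output field, if one is present.
defaults = {
    name: field.get("default_value")
    for name, field in response["response_data"]["outputs"].items()
}
print(defaults)  # {'01_letter': 'a'}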

 

Date and Time Format Updates

Dates and times are now displayed in a standardized format across the Spark platform. Enhanced UI components help users select a date and time with separate fields for year, month, day, hour, minute, and second. All times will use the 24-hour format – AM/PM options have been removed. The resulting value will be stored in accordance with the international datetime standard: YYYY-MM-DD HH:MM:SS, for example, 2023-05-29 09:30:00.

Outside of API tester, dates and times will be separated by a comma wherever they are displayed, for example, 2023-05-29, 09:30:00.
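For consumers who need to produce or compare timestamps in this format, here is a minimal Python sketch (illustrative only; the date shown is the example from above):

from datetime import datetime

# Format a timestamp in the standardized form (24-hour clock).
stamp = datetime(2023, 5, 29, 9, 30, 0)
print(stamp.strftime("%Y-%m-%d %H:%M:%S"))   # 2023-05-29 09:30:00
print(stamp.strftime("%Y-%m-%d, %H:%M:%S"))  # comma-separated display form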

 

Improvements to Service Documentation Properties: File UUID, File Hash, OriginalServiceHash, and CompiledOutputHash

We’re improving traceability across environments and between deployments by implementing consistent hashes and identifiers for compiled Spark and Neuron assets.

Service UUID (Universally Unique IDentifier) and Version UUID serve as unique references within a single tenant or instance. However, because deployment metadata is also considered when assigning these UUIDs, new IDs may be generated when deploying to an on-premises hybrid runner. To help you understand the provenance of a given service (e.g. “What is this thing that was built, tested, and approved in UAT but then rolled out to production globally and to five regional hybrid runners?”), we also embed a File UUID in the package at compile time and log it. Because this identifier persists and does not change across environments or deployments, customers can follow the journey of any asset through each step of the process.

Along with these generated IDs, we also store “hashes” of the uploaded file, the original service, and the compiled Neuron Wasm. These hashes act as fingerprints for files, allowing us to verify that a file is the same as what was originally uploaded, or that distributed code is the same as what was originally compiled. Please note, UUIDs and hashes will not be added to existing services retroactively, so these improvements only apply to new services.
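As a conceptual illustration of how a file hash acts as a fingerprint (the release note does not specify which algorithm Spark uses; SHA-256 below is an assumption chosen purely for the sketch):

import hashlib

def file_fingerprint(path: str) -> str:
    """Hash a file's bytes; the same bytes always produce the same digest,
    so a matching digest confirms the file is unchanged."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()

# Example usage (hypothetical file name):
# print(file_fingerprint("Sum_service.xlsx"))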

 

Compiler version is now visible in Service Documentation > Version Details

The new service documentation properties mentioned above also include the Neuron Compiler version. As with the above changes, this applies only to new services and will not be displayed for existing services or where Neuron is not configured.

 

Improvements to Add New Version screen

When adding a new version of a service, the upload summary modal window now shows the differences between the model being uploaded and the current published version of the service. This information can help in determining how to populate the Upgrade type, Version label, and Release Notes fields. When selecting an upgrade type, Spark will display a semantic version preview based on the type selected. If any warnings are generated during the conversion process, these can be viewed by clicking or tapping the “See warning details” button.

Once the file has been converted and compilation is complete, a success message will be displayed, together with an estimate of the effort and cost savings compared with traditional development.

 

Additional confirmation step when deleting user groups

To limit the possibility of accidental deletion of user groups, we’ve introduced an additional step to the process. Users will now be required to type the word DELETE into a confirmation modal window. Additional messaging has also been added to emphasize the impact of deleting user groups and subsequent recovery challenges.

 

Download service file naming convention

We’ve simplified the naming convention used by Spark when generating service files for download. Downloaded services will now be named according to the following convention:

  • Original Service: <original file name> Example: If the original file name is Add.xlsx, then the downloaded service file name will be Add.xlsx

  • Configured Service: <service name> [Version] (Configured).<extension> Example: If the service name is ‘Sum_service’, then the downloaded service file name will be Sum_service [0.1.0] (Configured).xlsx

  • WebAssembly module: <service name> [Version] (Wasm).zip Example: If the service name is 'Sum_service', then the downloaded file name will be Sum_service [0.1.0] (Wasm).zip

As before, users may still rename the files as desired.
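Purely as an illustration of the patterns above (a hypothetical Python sketch, not part of Spark), the configured-service and Wasm file names could be composed like this:

def configured_service_filename(service_name: str, version: str, extension: str = "xlsx") -> str:
    # <service name> [Version] (Configured).<extension>
    return f"{service_name} [{version}] (Configured).{extension}"

def wasm_module_filename(service_name: str, version: str) -> str:
    # <service name> [Version] (Wasm).zip
    return f"{service_name} [{version}] (Wasm).zip"

print(configured_service_filename("Sum_service", "0.1.0"))  # Sum_service [0.1.0] (Configured).xlsx
print(wasm_module_filename("Sum_service", "0.1.0"))         # Sum_service [0.1.0] (Wasm).zip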

 

API Tester Documentation Template Changes - More Sample Rows and Added Request Headers

Service documentation has been improved to include three rows of sample data for Inputs and Outputs (up from one) and to include a sample Request Header. Larger samples allow for more varied examples.
