
Product Updates

See what’s new in our product. Check out the updates below.

Xcall now supports array inputs

Previously, Xcall users could only make one API call per user-defined function, and each input and output needed to be defined individually. This made Xcalls for models with multiple inputs and outputs somewhat complicated to set up and especially challenging to debug. Now that Xcall works as an array function, users can select multiple input names and values at the same time, and Xcall will match the inputs and the values based on their position. This means you can make multiple Xcalls from a single UDF! You can also specify your output template as an array, drastically simplifying your Xcall UDFs. For example, let’s say you have a model with 13 inputs and 7 outputs. Previously, running 10 scenarios through this model would have required setting up 28+ parameters for each output and resulted in 70 UDFs. Using arrays, the same result can be achieved with a single UDF and only 4 parameters.

Execute API v4

This month, we’re excited to roll out a new version of our Execute API! Version 4 can ingest multiple inputs simultaneously and provide all of the results in one response. This significantly reduces processing time compared to v3. This version is ideal for handling up to several thousand inputs; for truly massive amounts of data, however, we recommend using our new Batch API. Please see below for an explanation of the differences between the two APIs.

Introducing Batch!

Our new Batch API allows you to process large numbers of inputs in parallel, speeding up processing time significantly. With Batch you can submit huge numbers of inputs (without running into rate limits) and add additional inputs (in chunks of varying sizes) before closing the batch and getting your results. You can also choose to get results as soon as they’re ready, if you prefer. Depending on the size of the data payload, Batch uses cloud technology to scale horizontally, adding computing power to handle the volume.

To understand the differences between the v3 and v4 Execute APIs and Batch, imagine submitting API requests to Spark like sending vehicles through a toll booth. Using Execute v3, each input is one passenger, each is driving their own car, and each must pay their own toll. Everyone gets through, but it can take quite a while if there’s a lot of traffic. Using Execute v4, each input is still a passenger, but now they’re all on a bus together and the bus driver pays the toll collectively. As a result, everyone gets through much faster than if they were in their own cars, but only one bus gets through at a time. Batch, on the other hand, is like driving a caravan through a toll plaza. The first vehicle calls ahead to let the toll plaza know they’re coming, and the plaza agrees to open up as many toll booths as possible, letting all of the buses, cars, and trucks through simultaneously. Once the last vehicle has passed through, the additional lanes are closed down again.
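To make the v3/v4 difference concrete at the code level, here is a minimal sketch of the two request shapes. The endpoint paths, payload fields, and authorization header below are illustrative placeholders rather than the exact API contract; see the Execute API documentation for the real details.

    # Minimal sketch: one call per input (v3 style) vs. all inputs in one call (v4 style).
    # URLs, paths, payload fields, and auth header are placeholders for illustration.
    import requests

    BASE = "https://YOUR_TENANT.coherent.global"        # placeholder tenant URL
    HEADERS = {"Authorization": "Bearer YOUR_TOKEN"}    # placeholder auth

    inputs = [{"age": 30}, {"age": 45}, {"age": 60}]    # example input records

    # v3 style: one passenger per car - a separate request for each input record
    for record in inputs:
        requests.post(f"{BASE}/EXECUTE_V3_PLACEHOLDER",           # placeholder path
                      json={"request_data": {"inputs": record}},
                      headers=HEADERS)

    # v4 style: everyone on one bus - all input records in a single request
    requests.post(f"{BASE}/EXECUTE_V4_PLACEHOLDER",               # placeholder path
                  json={"inputs": inputs},
                  headers=HEADERS)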
Python SDK

Our new Python SDK gives developers everything they need to integrate our v3 and v4 Execute APIs and our Batch API into their code. The Python SDK makes using Batch a breeze: after importing the package and configuring a few variables, you can create a batch and add data to it with just two lines of code. The SDK also includes tools to retrieve all results and close the batch, receive partial results as they become available, add additional inputs, get information about or check the status of a batch, and close or cancel a batch.

Contact your Customer Success representative to request access to the Python SDK.

    # Example code snippet
    # Create a Batch from Spark and add data to the batch.
    # 'input' accepts a list of dicts or a pandas DataFrame; 'meta' is an optional object.
    # If successful, batch_execute returns a Batch instance.
    batch = sdk.Service.batch_execute(input=my_inputs, meta=my_meta)

    # Get all the results from the batch
    results = batch.get_all_results()
    print(results)

Other changes

Following last month’s new feature allowing admins to set an expiration date for user access, we’ve also improved some of the surrounding behaviors. Now, as soon as a user is given tenant administrator rights, any expiration date set for them is automatically removed. Because a tenant administrator's access can never expire, the option to set one also disappears from the Edit User screen. Small inconsistencies in metadata between the Testing, Integration, and Documentation tabs in the API Tester have been removed. For enhanced security and to enforce best practices, supervisor:pf will no longer appear for selection in the “User Groups” section when setting up API Key Groups. Instead of creating one key with universal rights, tenant admins should create new API keys for specific purposes, following the principle of least privilege. This change does not affect existing API key groups.

Related products: Spark

Custom Branding

Spark now supports custom branding! Tenants may be configured to display a different logo in place of the Coherent logo. With your own logo in place, you can present Spark as an integral part of your technology solution and give your employees and customers confidence that they are using an internally approved and vetted application. For now, Coherent must enable custom branding for you via tenant configuration; soon, however, tenant admins will be able to do so on their own. Once enabled, tenant admins simply upload the logo and it will be displayed the next time they log in.

Webhook Configuration

In March we introduced the ability to create webhooks in Spark; now we’ve added the ability to configure and enable webhooks via the new Webhook configuration section of Tenant configuration. Webhooks allow you to export event data (typically seen in the Recent activity panel within the Spark UI) and process or log it within your own systems. You can then set up workflows based on these events, for example sending requests to Spark APIs or to external systems. One example might be to deactivate a service if requests exceed a certain threshold or if the service is executed by users other than those in a pre-defined list.

How to set up a webhook:

1. As a Tenant Admin, open the User menu and click or tap Options.
2. When the Tenant configuration page opens, navigate to the Webhook configuration tab.
3. Check the Enable webhooks option.
4. Click the New webhook button.
5. Complete the fields in the modal window:
   - Add a descriptive Name.
   - Specify the Endpoint URL to be called when an event occurs. The event details will be passed to this endpoint in the request payload.
   - Add Request headers as required.
   - Add Query string parameters as required. These static parameters and values will be appended to the Endpoint URL before it's called.
   - Enter a Description, if required.
6. Click Add to save your changes.

The webhook will now be triggered whenever any event occurs.
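If you want a feel for what the receiving end of a webhook might look like, here is a minimal sketch of an endpoint that accepts Spark events, assuming a JSON payload; the fields read from the payload are illustrative placeholders rather than the documented event schema.

    # Minimal sketch of a webhook receiver (assumes Flask is installed).
    # The payload fields accessed below ("event", "service") are illustrative
    # placeholders, not the documented Spark event schema.
    from flask import Flask, request

    app = Flask(__name__)

    @app.route("/spark-events", methods=["POST"])
    def spark_events():
        event = request.get_json(silent=True) or {}
        # Log the event, or trigger your own workflow here
        print("Received Spark event:", event.get("event"), event.get("service"))
        return "", 204

    if __name__ == "__main__":
        app.run(port=8080)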
Neuron

Performance improvements

In our latest release, we're excited to announce a significant performance improvement for our SaaS platform users. By leveraging the power of Neuron technology, we've optimized the execution of our APIs, resulting in a speed increase of up to 30%. This enhancement enables more efficient and seamless interactions with the platform, improving the overall user experience and accelerating the completion of critical tasks. Stay ahead with our faster and more reliable SaaS platform, and continue to enjoy the benefits of our ongoing commitment to innovation and excellence.

Neuron version targeting improvements

When a new version of a service is created or recompiled in Neuron and the default compiler version for updates is set to MaintainVersion, Spark will first verify that the resolved version of Neuron exists. If it does not, Spark will instead use StableLatest for recompilation. Please note: if the tenant default compiler for new services is set to Release Candidate, it is best to use the same configuration for service version updates as well.

Recompile process improvements

Now, when recompiling a service with a different version of Neuron, Spark shows a status bar and provides access to the full upload log.

Security

Access expiration

Spark now provides the ability to define deactivation dates for users. In the user creation and user editing screens, tenant admins will find a new option field called Access expiration. The date and time defined here determine the moment when the user loses access to Spark. When the expiration date has passed, the expired user will be sent to a redirect page when they try to log in. If they try to access any Spark APIs with their bearer token, a 401: NO AUTHORIZATION error will be returned. Tenant admins can now be confident that users will only have access for as long as they need it.

Other security improvements

The way Spark handles deactivated users has also changed. They will no longer simply be disabled; instead, the date of their deactivation will be added as a custom attribute, account_end_date. When users attempt to log in, this parameter will be checked, and if the user has the account_end_date claim in their token and the defined date has passed, a 401: NO AUTHORIZATION message will be returned. Spark now includes a nonce parameter in authentication requests to prevent ID token replay attacks and enhance the security of Spark's authorization flow. Spark now validates audience claims when a client accesses an API using an access or ID token. Access will not be granted to any client using an ID token.

Spark Forms

Previously, users who edited or customized their Spark Forms found that changes to the underlying model were not incorporated into the edited form, so they needed to manually synchronize the edited form with the updated service inputs. The only viable option had been to delete the service and upload it afresh, then redo their previous edits and customizations. With this release, Spark intelligently merges changes from newly uploaded service versions into the customized “FormSpec”. If a new service version includes new inputs or outputs, these will be inserted into the customized FormSpec within a subsection labeled “New control”. For existing controls, properties such as control type and metadata will be updated as necessary. If a new service version removes inputs or outputs, their corresponding control definitions in the FormSpec will also be removed; if that results in an empty section or sub-section, it will be removed as well.

Execute API enhancements

Direct addressing lets you submit and request outputs directly from the file; see request_data under "Direct cell reference". Tenant administrators can control access to directly referenced outputs using the Direct addressing outputs enabled toggle in Service Documentation > Service Details. We have also added the ability to download a copy of the original Excel file with the API response; see request_meta under excel_file.

Validation API

As requested by several customers, the Validation API now includes the following additional information: input_message_title, input_message, error_style, error_title, and error_message.

API Call History

A JSON-formatted API Call History log provides more flexibility and options for integration with other systems and allows consumers to filter and include only the relevant information more easily. In addition to the existing Excel and CSV format options, Spark now offers JSON format download via the UI and a new API. Within Spark, navigate to the API call history page for your service, click Download all API calls, then click Download in JSON format to download the .zip archive. See the User Guide for more details.
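As an illustration of the kind of downstream processing the JSON format enables, here is a minimal sketch that filters a downloaded export by call_purpose; the archive layout and field names are assumptions made for illustration, not the documented export format.

    # Minimal sketch: filter a downloaded API Call History export by call_purpose.
    # Assumes the .zip archive contains JSON files of call records; the exact
    # layout and field names ("call_purpose" at the top level) are assumptions.
    import json
    import zipfile

    records = []
    with zipfile.ZipFile("api_call_history.zip") as archive:
        for name in archive.namelist():
            if name.endswith(".json"):
                data = json.loads(archive.read(name))
                records.extend(data if isinstance(data, list) else [data])

    quotes_only = [r for r in records if r.get("call_purpose") == "Quote illustration"]
    print(f"{len(quotes_only)} of {len(records)} calls were quote illustrations")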
Xcall

C.SPARK_XCALL() UDF improvements

Previously, if the Xinput and Xoutput tables had a maximum of 100 rows and 100 columns, but the file uploaded by a user only contained data in 80 rows, the C.SPARK_XCALL() UDF (User Defined Function) would generate only 80 rows. With this release, Xcall calls the Service Info API first and stores the response in its memory; this definition is used to generate the inputs and outputs instead of the default. Xcall’s memory will be refreshed whenever the user logs in or the Spark_Xcall function is recalculated. If a user already has Excel and Spark Assistant open and a new version of the service used within an Xcall is published, Spark Assistant prompts the user to sync with the latest version by clicking Sync.

Other improvements

- Spark previously truncated file names longer than 50 characters, which occasionally led to the creation of new services instead of new versions of existing services. Spark now stores the full names of uploaded files and uses these values when determining whether an upload should be treated as a new service or a new version.
- XReport now supports the Montserrat (Bold/Regular) and Halant (Regular) fonts.
- The Upload Log now displays any Spark-related messages before any Neuron-related messages, for all categories: Info, Warning, Error, and Tips.
- The Active Service counter on the Insights page is now only displayed if the EnableActiveService flag has been set to TRUE for the tenant. Since most tenants are not using this feature, it proved confusing to have it displayed by default.
- A link to our recently launched Ignitors Community has been added to the Spark User Menu.

Related products: Spark

Administration & Service Management

Neuron version selection

Our newest release gives users additional control over the version of Neuron used to compile a service. This is helpful for providing consistency across a suite of services, especially if you’d like to perform additional testing on the newest version of Neuron before upgrading. Currently, all new services use the newest version of Neuron by default. Now, tenant admins can choose a default behavior for both new services and new versions of existing services. For new services, they can specify one version of Neuron, or choose between the most recent stable version (RecentStable) and a more “bleeding-edge” ReleaseCandidate version, which includes any additional updates outside our standard release schedule. For updates to existing services, the new default behavior is to use the same Neuron version as the original service; however, admins may also choose RecentStable or ReleaseCandidate. Along the same lines, previously, any service compiled using an older version of Neuron could only be recompiled using the latest version. Now, individual users can choose to recompile a service using any version of Neuron they wish. This is helpful if you’d like to upgrade an older service but aren’t familiar with (or your organization hasn’t officially approved) the latest version of Neuron.

Analytics

Spark Insights now provides more inclusive statistics, especially for users of our Import/Export (ImpEx) tool.

Testing

Explore test cases in API Tester

Previously, testing users who wished to review a particular test case from their testbed results could only copy and paste the data into the API Tester to conduct their review. Spark’s Testing Center now provides a way to pre-load the API Tester with the data from any test case in your testbed. All you need is the UUID, which is included when you download the testbed results as an Excel file. You can see step-by-step directions in our User Guide.

Systematic test case generation

Users tasked with testing often like to examine test cases presented in a more systematic (as opposed to randomised) way. Rather than reordering the testbed results in the downloaded Excel file, users will now see the “Systematic test case generation” option when creating a testbed. This option produces test cases in order, based on all possible permutations of the inputs specified. For complete information, please see our User Guide.
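To illustrate what systematic (permutation-based) generation means in contrast to random sampling, the sketch below enumerates every combination of a few hypothetical inputs in a fixed order; in Spark, the actual inputs and bounds come from your testbed configuration rather than hard-coded values like these.

    # Illustration of systematic (permutation-based) test case generation.
    # The input names and candidate values here are hypothetical examples.
    from itertools import product

    candidate_values = {
        "age": [18, 45, 65],
        "smoker": [True, False],
        "term_years": [10, 20],
    }

    # Every combination of the candidate values, in a fixed, predictable order
    names = list(candidate_values)
    for combo in product(*candidate_values.values()):
        test_case = dict(zip(names, combo))
        print(test_case)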
Xcall

To maintain consistent behavior between Excel and Spark, we have removed the ability to specify subservices in Xcall using dot (“.”) notation (for example, =C.XCALL("TestFolder/BasicService.Subservice1")). Users of Xcall should now specify subservices via the #service_category request_meta metadata field, as demonstrated in the examples below. Please be aware: all existing Xcalls that use subservices will need to be changed to the new format.

1. Pass a single subservice as an inline argument to the function call:
   =C.SPARK_XCALL("SERVICE_URI",D6:E13,,"#service_category","Subservice1")
2. Pass multiple subservices as an inline argument to the function call:
   =C.SPARK_XCALL("SERVICE_URI",D6:E13,,"#service_category","Subservice1,Subservice2,Subservice3")
3. Pass multiple subservices by cell reference as an inline argument to the function call:
   =C.SPARK_XCALL("SERVICE_URI",D6:E13,,"#service_category",C4:C6)
   In this example, cells C4 to C6 contain the names of the sub-services.

If no subservice name is provided, the default subservice will be assumed. For complete documentation on using subservices with your Xcalls, see our documentation.

Related products: Spark

Targeted testing - generate random testbed for selected subservices

Spark’s Test Case generation feature allows you to test services at scale, saving hours of manual effort. Whether you need to test tens or thousands of scenarios, Spark can generate them using random data for your services' inputs, within the bounds you specify. With this release, you can limit your test to specified subservices as well, so you can focus on the things that matter.

How to generate random test cases in Spark:

1. Navigate to Testing Center in Spark and click New Testbed.
2. From the modal window, click Open test case generator.
3. Complete the fields in the modal window:
   - Testbed name
   - Version
   - Subservices: in the dropdown, deselect any sub-services that are not required for the test. Only sub-services available in the selected version of the service will be listed here.
   - Number of test cases
   - Testbed description (optional)
4. Click Next and enter your desired boundary limits for your selected fields. Inputs for deselected sub-services will not be displayed here.
5. Click Generate test cases to finish.

Additionally, the Run testbed and Generated test cases summary modals now specify the subservices that were included.

New Services section in Service Management

Services is a new section under Service Management giving an overview of all the services in a tenant that the user can access. Users can 1) search and filter the list based on a variety of criteria (including name, folder, tags, and dates), 2) save filters for later use, 3) view and manage saved filters, and 4) perform actions on services. Actions include: Go to Service, Download Service, Recompile Neuron, Go to API Tester, and Go to API Call History. Please note: the tenant configuration option EnableTags must be set to TRUE for tag-related features to be displayed and accessible. If a Saved Filter includes tags as part of its parameter set and tags are later disabled for any reason, users of the filter will see them greyed out, and a tooltip will explain that tags are not enabled. Users may still remove any previously set tags from the filter.

New Billing and usage option in User menu

Users included in the User: Billing and payment user group may now access our billing portal (provided by Amberflo) via the Billing and usage link displayed in the User menu. This link will only be displayed (and access will only be granted) to users in that user group.

Xcall

Request Meta support in Xcall

Spark’s Xcall function allows users to consume APIs from within their Excel models. Values from the model may be passed as input parameters, and values from the API response may then be consumed by other calculations within the model. Along with these request parameters, users may now also provide additional information about the call, such as its purpose and the model the call is being made from. This information is searchable via the API Call History log and is useful for auditing and tracking service usage. This additional info is referred to as “request meta”, since it is not directly involved in generating the calculation result or response but provides general information about the call. This release introduces the following request meta parameters: compiler_type, source_system, and call_purpose. Users may specify these parameters as part of the input_template or input_values ranges; simply prefix these labels with a ‘hash’ or ‘pound’ symbol (#) so Spark knows to treat them as request meta parameters instead of standard parameters.
Please note: this feature is only available when using Xcall via the Spark Assistant.

Example call:
=C.SPARK_XCALL("SERVICE_URL", INPUT_TEMPLATE, OUTPUT_TEMPLATE, "#call_purpose","Inline call","#source_system","Inline source")

If no value is provided for a particular request meta parameter, Spark will revert to the default values: compiler_type: Type3, source_system: Spark Assistant xCall, call_purpose: xCall.

Public API support in Xcall

Xcall now supports public APIs on the coherent.global domain. Simply specify the full endpoint URL as the first parameter of Xcall and proceed as normal.

Example for a standard/private API:
C.Spark_Xcall("XCallV3/service8","input_location")

Example for a public API:
C.Spark_Xcall("https://excel.dev.coherent.global/coherent/api/v3/public/xcall/execute/batch/{service_uri}")

New Recalculate SPARK_XCALL function button in Spark Assistant

When using Xcall, it’s always best to refresh the functions when re-opening or returning to an Excel file after a long period of time. Even if the inputs are the same, it’s possible there is a new version of the service you’re interacting with. Spark Assistant refreshes all Xcall functions in an Excel file automatically every time you log in, but this release introduces a manual method via a new button in the Build menu. Clicking Recalculate SPARK_XCALL function will re-run all of the Xcall functions simultaneously.

Import/Export Tool - Define Destination Folder

Once a service has passed testing in UAT, the next step is usually to push it to the production environment. Re-uploading the model to production manually is not recommended, since there is a risk of uploading a different file or a previous version by mistake, thereby potentially introducing untested changes to production. Coherent’s Import/Export (“ImpEx”) Tool enables customers to promote files from test environments through to production safely and enforces good deployment practices. Some customers have a setup in which folder names (and sometimes even service names) differ between Production and UAT Spark environments. The Coherent ImpEx Tool now supports such scenarios: customers may export a particular service from one folder in UAT and then upload it into a different folder within their production environment. Previously, Spark determined the destination of an upload via the name of the exported folder. Now, within the ImpEx manifest file, users may define an alternative destination for the output. If the specified alternative destination exists, the upload will be treated as an update to the existing service and the version will be incremented accordingly.

Validation API now includes default values

For new services, the Validation API now includes default values in its output, enabling consumers of the API to more quickly understand what these are without having to look elsewhere. When the Validation API request_meta's validation_type is equal to default_values, the resulting output will include a default_value field containing the resolved value at the time of model upload. By contrast, when validation_type = dynamic, default values will not be included, since they don’t apply.
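As a rough illustration of the request side, a request_meta fragment like the one below would ask for default values; the surrounding request body is omitted here, and the exact structure should be confirmed against the Validation API documentation.

    {
      "request_meta": {
        "validation_type": "default_values"
      }
    }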
Please note: this feature only applies to new services; outputs for existing services will not include default values.

Example output containing default_value:

    {
      "status": "Success",
      "response_data": {
        "outputs": {
          "01_letter": {
            "validation_allow": "List",
            "validation_type": "static",
            "dependent_inputs": [
              "02_number"
            ],
            "default_value": "a",
            "min": null,
            "max": null,
            "options": [
              "a",
              "b"
            ],
            "ignore_blank": true
          },
    ...

Date and Time Format Updates

Dates and times are now displayed in a standardized format across the Spark platform. Enhanced UI components help users select a date and time with separate fields for year, month, date, hour, minute, and second. All times use the 24-hour format; AM/PM options have been removed. The resulting value will be stored in accordance with the international datetime standard, YYYY-MM-DD HH:MM:SS, for example, 2023-05-29 09:30:00. Outside of the API Tester, dates and times will be separated by a comma wherever they are displayed, for example, 2023-05-29, 09:30:00.

Improvements to Service Documentation Properties: File UUID, File Hash, OriginalServiceHash, and CompiledOutputHash

We’re improving traceability across environments and between deployments by implementing consistent hashes and identifiers for compiled Spark and Neuron assets. Service UUID (Universally Unique IDentifier) and Version UUID serve as unique references inside a single tenant or instance. Since metadata about the deployment is also considered when assigning these UUIDs, however, new IDs may be generated when deploying to an on-premises hybrid runner. To help you understand the provenance of a given service (e.g. "What is this thing that was built, tested, and approved in UAT but then rolled out to production globally and to five regional hybrid runners?"), we also have a File UUID, which is embedded in the package at compile time and logged. By having an identifier for a file that persists and does not change across environments or deployments, customers can truly follow the journey of any asset through each step of the process. Along with these generated IDs, we also store “hashes” of the uploaded file, the original service, and the compiled Neuron Wasm. These hashes are like fingerprints for files and allow us to verify that files are the same as what was originally uploaded, or that distributed code is the same as what was originally compiled. Please note: additional UUIDs and hashes will not be appended to existing services ex post facto, so these improvements only apply to new services.
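As a general illustration of how a hash acts as a file fingerprint, the sketch below computes a digest that stays identical for byte-for-byte identical files and changes whenever the file changes; SHA-256 is used here only as an example, since the algorithm Spark actually uses is not specified in this note.

    # Illustration of a file hash as a "fingerprint" (SHA-256 chosen for the
    # example; the algorithm Spark actually uses is not specified here).
    import hashlib

    def file_digest(path: str) -> str:
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(8192), b""):
                h.update(chunk)
        return h.hexdigest()

    # Identical files yield identical digests; any modification changes the digest.
    print(file_digest("Sum_service.xlsx"))  # hypothetical file name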
Compiler version is now visible in Service Documentation > Version Details

Among the previously mentioned new service documentation properties is the Neuron Compiler version. As with the changes above, this applies only to new services and will not be displayed for existing services or where Neuron is not configured.

Improvements to Add New Version screen

When adding a new version of a service, the upload summary modal window now shows the differences between the model being uploaded and the currently published version of the service. This information can help in determining how to populate the Upgrade type, Version label, and Release Notes fields. When selecting an upgrade type, Spark will display a semantic version preview based on the type selected. If any warnings are generated during the conversion process, these can be viewed by clicking or tapping the “See warning details” button. Once the file has been converted and compilation is complete, a success message will be displayed, together with an estimate of the effort and cost savings vs. traditional development.

Additional confirmation step when deleting user groups

To limit the possibility of accidental deletion of user groups, we’ve introduced an additional step to the process. Users are now required to type the word DELETE into a confirmation modal window. Additional messaging has also been added to emphasize the impact of deleting user groups and the subsequent recovery challenges.

Download service file naming convention

We’ve simplified the naming convention used by Spark when generating service files for download. Downloaded services will now be named according to the following convention:

- Original Service: <original file name>. Example: if the original file name is Add.xlsx, the downloaded service file name will be Add.xlsx.
- Configured Service: <service name [Version] (Configured).extension>. Example: if the original file name is ‘Sum_service’, the downloaded service file name will be Sum_service [0.1.0] (Configured).xlsx.
- WebAssembly module: <service name [Version] (Wasm).zip>. Example: if the original file name is 'Sum_service', the downloaded service file name will be Sum_service [0.1.0] (Wasm).zip.

As before, users may still rename the files as desired.

API Tester Documentation Template Changes - More Sample Rows and Add Request Headers

Service documentation has been improved to include three rows of sample data for Inputs and Outputs (up from one) and to include a sample Request Header. Larger samples allow for more varied examples.

Related products: Spark

Spark

API Call History

In this release, we updated the design and function of the API Call History page. Search and filtering options make it easy to narrow down the list of calls, now with drop-down menus for call purpose and user, too. We also made the table easier to navigate by rearranging columns, adding Correlation ID and Call Purpose, and keeping the checkbox and action columns static to avoid unnecessary scrolling. Finally, we made some backend enhancements, resulting in improved performance for larger history sizes!

Other changes

- Recent Activity Log - When a user restores a service to an earlier version, this is now reported in the Recent Activity Log.
- Testing Center - Testbed tables no longer include input columns for sub-services that are not selected. Previously the columns were present but blank.
- Testing Center - Spark now displays a warning when uploading a testbed template whose inputs/outputs do not match those of the service.
- Compare versions - After requesting a service Version Comparison Report, if a user minimizes the progress modal window, they may now re-display it via the Background Activity dropdown (located in the top-right of the Spark interface) to view the comparison summary before downloading the report.
- Service Download - To make it more obvious when a user is working on a “Configured Service” as opposed to an “Original Service”, Spark now injects an additional tab into the file when the user opts to download a Configured Service. This tab includes details on the version downloaded.
- Insights - Some adjustments have been made to how services are counted in Insights. New service uploads will no longer be counted as updates; only new versions of a service will be counted as updates.

Security

Single Sign On

Customers can now set up Single Sign On (SSO) for their Spark tenants and control access and permissions automatically. This allows network administrators to easily set up single-click access to Spark for their organization and to use existing user groups. For more details, see our instructions on using Azure AD as an identity provider (IdP). Instructions for other IdPs can be added on request.

Features Permissions

We have improved some of our backend implementations around Features permissions. The Features permissions screen gives tenant administrators highly granular control over API Key access to microservices in the Spark backend. This enables useful integrations with backend APIs that can download converted logic, download the API call history, orchestrate some of the testing capabilities outside of Spark, and more.

To use Features Permissions:

1. Set up a new user group with a descriptive name, such as user:api_service_management.
2. Create a new API Key Group. Include the newly created user group as well as all user groups needed to access the target services. This will automatically create a new API Key.
3. Go to Options > Tenant configuration > Features permissions and find a feature that you would like to call with the API Key, for example Spark.DownloadCsvLog.json, which can download the API Call History as a CSV file.
4. View the feature and add the user group created in Step 1.
5. The API key created in Step 2 can now call the feature found in Step 3.

For help, please reach out to our Customer Success team!

Spark Forms

Frontend apps can now display a service's name when calling the service using the ID or version methods, since folder_name and service_name are now included in the response_meta section of getFormSpec API responses. Previously, this was only possible when calling the service using the service name method.
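For illustration, a response_meta section containing the two new fields might look like the fragment below; the surrounding response is abbreviated and the example values are hypothetical.

    {
      "response_meta": {
        "folder_name": "Pricing",
        "service_name": "TermLifeQuote"
      }
    }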
Spark Assistant

In this release of Spark Assistant, we have prefixed all Coherent functions with C., for example =C.SPARK_XCALL or =C.SPARK_XMLTOJSON. This helps distinguish Coherent functions from other add-ins. Any services uploaded to Spark using the older convention (e.g. SPARK_XCALL) will still work in the Spark UI and APIs; if they are being used in Spark Assistant, however, we recommend adding the prefix to existing functions in order to interact with the calculated outputs in Excel.

Related products: Spark, Spark Assistant, Spark Forms

Xcall

In March we’re introducing a new version of Xcall, an easier way to bring in calculations and data from other Spark services. Together with the Spark Assistant, you can now use Xcall as a native Excel formula, without VBA or LAMBDA. The new Xcall formula, =SPARK_XCALL(), consists of four components, each of which builds on the previous one to help you construct your call. First, you can search through all the services you can access, then pick one and see all the available versions. Next, you can generate input and output templates for the service, from which you choose only those inputs you want to change and only those outputs you wish to receive. Finally, when all four parameters are defined, Xcall retrieves the requested data from the specified Spark service. This visual, iterative process is highly intuitive and eliminates the need for arcane, nested formulas or learning another programming language!

Xcall enables common calculations to be standardized and centralized for use across an organization, eliminating local variations. Define the calculations as functions in one place and reference them wherever they are needed, confident that it’s always the correct version. Xcall also offers flexibility when running tests or performing scenario analysis by letting you fix certain inputs and easily manipulate others. This shows how your calculations are affected by changes in assumptions, empowering you to call on your shared functions to produce complex scenarios, simulations, and more!

For more information about setting up Spark Assistant and using the new Xcall, please consult our documentation. If you run into any issues or have questions, contact Support.

Model Comparison

Spark now offers the ability to compare any two versions of an Excel model. To see it in action, navigate to the Version Overview page, select two versions, and click Compare; Spark will generate a Model Comparison Report with a detailed analysis of the differences, clearly identifying what’s been changed. This greatly facilitates understanding of the model's development process and eliminates the need for tedious, time-consuming manual comparison. You also always have a precise and up-to-date record of the model’s change history, which is ideal for audits!

Webhooks

One of our customers was looking for a way to initiate an action with Spark whenever a service is uploaded. To address this requirement, we implemented a webhook for Spark! Webhooks are a type of system-to-system communication following the Hollywood principle of “Don’t call us; we’ll call you!” One system sends a message to another system as soon as a specific event occurs, similar to a push notification on your smartphone. In the case of Spark, our webhook can pass information based on events such as uploading a service, updating a service, running a testbed, and more. This introduces new ways to embed Spark capabilities more deeply into an operational process without having to develop a lot of additional code! Use cases range from simple notifications to assigning tasks in a workflow or automating deployment. If you are interested in having a webhook set up for your environment, please contact the Coherent team. We will be adding the ability to configure your own webhooks in a future release!

Integration improvements

With this release, we made it even easier to integrate your Spark service APIs.
First, we added data object and response data object definitions to the Swagger definitions Spark provides. Swagger definitions are like blueprints for the APIs and can be used to easily import API calls into Postman and low-code tools, and to create (and embed) interactive API documentation via Swagger UI or Redocly. Second, all of our code snippets now include the Spark tenant name, so there’s no need to look this up manually!

Bespoke user interfaces for every model upload

Spark Forms brings Spark’s power to your phone, your tablet, that old computer at the local library… almost any device! For every Excel model you upload, Spark can now generate a bespoke, responsive, web-based app. Share the link with colleagues (or display the QR code for them to scan) and get everyone testing and providing feedback sooner. You can even embed Spark Forms within your workbench or testing suite for convenience. Use our web-based form editor to customize your input labels, reorder them, or even remove them to create a custom testing interface, all without touching your original model. Reduce clutter and bring focus to the inputs (and outputs) that matter most. Speak to your Customer Success representative about enabling Forms for your tenant and for a demo.

Other notable enhancements

- Improved ability to handle large CSV outputs by splitting them into multiple files.
- The API Call History functionality can now be disabled upon customer request.
- Improvements to Neuron around the DATEDIF functionality.
- Proper support for single-row (1×N, N>1) vectors in the API and UI.
- Fixed the Execute API requested_outputs parameter to accommodate multiple filter items using either a string or a JSON array.
- Updated the Import/Export tool to use .NET 6.0. It now works on Mac computers with ARM-based Apple silicon (M1 and M2 processors)!

Did you know?

We've been curious about how ChatGPT 4 from OpenAI can be used in combination with Excel but haven't seen many industry experts share specifics. So, we went straight to the source! Here are the top four ways ChatGPT 4 will be able to support Excel users, as shared by ChatGPT 4 itself.

Should you have any questions about this release or interest in any of the updates above, please reach out to us via email.

Related products: Spark

Tag your services

This month, we’re introducing tags: a flexible and familiar way to organize your Spark services. Tags can be used to organize and group services across folders, or to mark service versions. You can choose from a list of administrator-defined tags when uploading services, and you can edit your tags at any time from the service version properties screen. Enhancements are coming! Let us know how you use tags and how they help you work better, faster, and smarter. Note: the tagging feature must be set up by your administrator. Contact our Customer Success team for more information.

Correlate API calls to integrating systems

Want to make sure you always know who’s calling your API and why? Trying to understand more about a specific API call to Spark? Spark can store additional information about each API call in three fields: source_system, call_purpose, and correlation_id. This makes it simple for an integrating system to identify itself and the type of call it’s making. It can also assign a unique identifier (such as a quotation number or transaction ID) to a call so everyone can easily find it again and other teams can review the related data in more detail. While this information was previously accessible when viewing individual call details or downloading call history as an Excel file, you can now use the search box on the API Call History screen to search source_system, call_purpose, and correlation_id. This makes it much faster to audit an individual API call within the Spark interface. Within the API Call History screen, it is also possible to: 1) review the specific components of the API request and response, 2) download the API call back into the Excel file, and 3) run the request again in the API Tester to examine the request and response components. Read more about Spark Execute API request parameters in our documentation.
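As a rough illustration of how an integrating system might pass these fields when executing a service, here is a hedged sketch; the endpoint path, payload layout, and authorization header are placeholders, so check the Execute API documentation for the exact contract.

    # Sketch: pass source_system, call_purpose, and correlation_id with an Execute call.
    # The endpoint path and payload layout are placeholders for illustration only.
    import requests

    payload = {
        "request_data": {"inputs": {"age": 42}},            # example input
        "request_meta": {
            "source_system": "PolicyAdminSystem",           # who is calling
            "call_purpose": "New business quote",           # why they are calling
            "correlation_id": "QUOTE-2023-000123",          # unique reference
        },
    }
    response = requests.post(
        "https://YOUR_TENANT.coherent.global/EXECUTE_ENDPOINT_PLACEHOLDER",
        json=payload,
        headers={"Authorization": "Bearer YOUR_TOKEN"},     # placeholder auth
    )
    print(response.status_code)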
What’s cooking at Coherent

- An improved version of Xcall (our way of connecting data from different Spark services) is coming soon. We will make significant enhancements to its functionality and usability to promote better self-discovery and easier implementation.
- You will also soon be able to compare differences in Excel workbooks between Spark service versions. Spark will provide an Excel file that highlights the differences between two versions of a Spark service.
- We are extending our integration with Snowflake by listing Spark in the Snowflake Marketplace.
- We are working to make Spark Assistant available in the Microsoft Office Add-ins Store. No more working through manifest files and organizational administration; just install directly from Excel!

Other notable changes

- You can now download the entire Neuron compiler log output for diagnosis.
- We are making continuous enhancements to Neuron for better file and function support for Excel edge cases, including: support for CLEAN, ISOWEEKNUM, TEXTAFTER, TEXTSPLIT, WEEKNUM, and WORKDAY.INTL; improved behavior of IFS and SWITCH when used within other formulas; correct behavior of INDEX for non-contiguous ranges; performance improvements for the LARGE and SMALL functions; and trimming extra rows from the bottom when using Xrichoutput.
- The Insights page now shows the lines of code and hours you have saved by using Spark.
- You can now give all X-mappings names that start with numbers (for example, Xinput_123 or XCSVOutput_24thMarch).

Did you know?

Spark helps you manage end-user computing (EUC) risk. According to research from Chartis, as much as $12.1 billion within the world’s 50 largest financial institutions could be at risk from improper use of EUC tools such as spreadsheets. Read our article here.

There’s an app for that. Our partners and customers are using Spark to generate apps in days, not months. Check out the latest video, and follow us on LinkedIn for more insights and conversations.

Meet us at Insurtech Insights Europe in London, March 1-2. We’ll be presenting at the Guidewire booth and look forward to meeting many partners and customers in person. Let us know if you’ll be there! https://www.insurtechinsights.com/europe/

Should you have any questions about this release or interest in any of the updates above, please reach out to us via email.

Related products: Spark

New settings for tenant administrators!

Managing your tenant settings has just gotten a whole lot easier! With the new Tenant Configuration page, tenant administrators can conveniently configure their tenant settings from the Options hub user interface. This space currently includes settings such as enabling ‘Public’ visibility for Spark service APIs, permission controls, and IP allowlisting; more will be added in the future.

Integrate with Spark easily using your favorite programming language!

We’ve made it even easier for developers to integrate with Spark using the programming language of their choice. Leverage the capability of Spark without having to worry about the code needed to integrate with Spark APIs. This functionality is available through the new “Code snippet” feature implemented in the Integration tab of the API Tester. This tab allows users to view the code needed to integrate with Spark’s APIs in 25+ languages, such as cURL, JavaScript, Go, Python, and many more. Choose your favorite language from the dropdown to see the resulting code snippet; Spark will even remember your selection for your next API!

Coherent Campus is SSO great!

Accessing Coherent Campus has just become a whole lot easier. Coherent Campus now has SSO (Single Sign On) directly from your Spark main user menu, so there is no need to log in separately to Coherent Campus anymore. Once you are logged into your Spark tenant, click Coherent Campus in the user menu; this will automatically log you into your Coherent Campus profile.

FAQs

I already have a Coherent Campus profile. Will my profile information stay the same? Yes. If you have an existing Coherent Campus profile, you will not lose any profile information, and the link will take you to your existing account.

I don’t have a Coherent Campus profile. Will it automatically create a new profile for me? Yes. If you don’t have an existing Coherent Campus profile, don’t worry; a new one will be created for you based on your tenant credentials.

Note: You will no longer be able to log in via the original sign-in page. Please access Coherent Campus from the Spark user interface.

Explainer training courses are now in Coherent Campus!

Good news! Coherent Campus has launched the Explainer training courses. You will learn how to use Explainer to bring quotes, illustrations, scenarios, and actionable advice to life, so your sales team can deliver an informative and personalized sales experience. What will you learn from the Coherent Explainer training courses? You will be able to configure an Explainer application using a template and understand how to customize an Explainer template to further fit your brand needs. Start learning now at https://campus.coherent.global/

Notable changes

- Refreshed documentation content with the Neuron release.
- Improved performance and stability of the Testing Center.
- Continuous improvements to support even larger and more complex files.
- Addressed issues with uploading from OneDrive locations.
- Fixed a number of bugs in the Spark Assistant comparison tool.
- Improvements to the Validation API to support Neuron services.
- We are working on the ability to add tags to services in a future release!

Should you have any questions about this release or interest in any of the updates above, please reach out to us via email.

Related products: Spark