[NAE-2267] For some mongo collections there are no indexes #386
base: release/7.0.0-rev9
Conversation
Introduced a new `@Indexable` annotation to flag classes for MongoDB indexing. Updated the Task, PetriNet, and Case classes to use `@Indexable` and adjusted the Mongo index configurator to process this annotation. This simplifies index configuration and improves maintainability.
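A minimal sketch of the pattern the description refers to, under stated assumptions: `@Indexable` is the new class-level marker from nae-object-library, and `@Indexed` (its origin is assumed here, the review calls these "custom `@Indexed`" fields) marks the fields the configurator should index. The field names mirror the `Case` fields discussed later in this review and are illustrative only.

```java
import com.netgrif.application.engine.objects.annotations.Indexable;
import org.springframework.data.mongodb.core.index.Indexed; // assumed origin of @Indexed

// Class-level marker: tells the Mongo index configurator to process
// this document's @Indexed fields.
@Indexable
public class ExampleDocument {

    // Field-level index requests, picked up by the configurator at startup.
    @Indexed
    private String visualId;

    @Indexed
    private String processIdentifier;
}
```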
Walkthrough

This PR introduces a new `@Indexable` annotation to mark classes for MongoDB indexing.
Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~10 minutes
Possibly related PRs
Pre-merge checks: ❌ Failed checks (1 warning, 1 inconclusive)
✅ Passed checks (1 passed)
📜 Recent review details
Configuration used: CodeRabbit UI · Review profile: ASSERTIVE · Plan: Pro
📒 Files selected for processing (1)
🧰 Additional context used
🧠 Learnings (1): 📓 Common learnings
🧬 Code graph analysis (1): nae-object-library/src/main/java/com/netgrif/application/engine/objects/annotations/Indexable.java (1)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (7)
🔇 Additional comments (1)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
Actionable comments posted: 8
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (4)
application-engine/src/main/java/com/netgrif/application/engine/configuration/properties/ImpersonationConfigurationProperties.java (1)
18-21: Update the documentation to reflect the new SpEL expression.

The documentation comment still references `redisProperties.namespace`, but the actual implementation now uses `redisProperties.session.namespace`.

Apply this diff to update the documentation:

```diff
 /**
  * The Redis namespace used for storing impersonation configurations.
- * This value is resolved from the "redisProperties.namespace" SpEL expression.
+ * This value is resolved from the "redisProperties.session.namespace" SpEL expression.
  */
```

Dockerfile (1)
1-17: Consider adding HEALTHCHECK and USER directives for production hardening.

While not blocking, these improvements align with container security best practices:

- Add a `HEALTHCHECK` instruction to enable container orchestration platforms to monitor application health
- Add a `USER` directive to run as non-root for an improved security posture

Example:

```diff
+RUN addgroup --system netgrif && adduser --system --ingroup netgrif netgrif
+RUN chown -R netgrif:netgrif /opt/netgrif/engine
+
 WORKDIR /opt/netgrif/engine
 EXPOSE 8080
+HEALTHCHECK --interval=30s --timeout=3s --start-period=40s --retries=3 \
+  CMD curl -f http://localhost:8080/actuator/health || exit 1
+
+USER netgrif
 ENTRYPOINT ["java","-Dfile.encoding=UTF-8","-jar","app.jar"]
```

application-engine/src/main/groovy/com/netgrif/application/engine/petrinet/domain/dataset/logic/action/ActionDelegate.groovy (1)
2419-2427: Add a null/empty check before `new ObjectId(role.processId)` to prevent a runtime exception.

The code at line 2424 calls `new ObjectId(role.processId)` without validation. However, the `ProcessRole` class constructor explicitly handles a null `processId` (line 55 in nae-object-library), proving the field can be null. If a non-global role has a null or empty `processId`, this will throw `IllegalArgumentException` or `NullPointerException` at runtime.

Add a defensive check similar to the pattern in `Task.java` (lines 172–185):

```groovy
if (role.processId != null && !role.processId.isEmpty()) {
    PetriNet net = petriNetService.get(new ObjectId(role.processId))
    // use net...
} else {
    // handle missing processId case (skip role or use fallback)
}
```

nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/MapField.java (1)
16-38: I18nString transition looks consistent; consider not exposing the internal mutable map directly.

The switch of constructors to `Map.Entry<String, I18nString>` aligns with the rest of the I18nString-based mapping and looks correct.

`getKeyValueTranslations()` currently exposes `super.keyValueTranslations` directly, which allows callers to mutate the internal state of the superclass:

```java
@Field(type = Flattened, index = false)
public Map<String, I18nString> getKeyValueTranslations() {
    return super.keyValueTranslations;
}
```

If external mutation is not desired, consider returning an unmodifiable view:

```diff
 @Field(type = Flattened, index = false)
 public Map<String, I18nString> getKeyValueTranslations() {
-    return super.keyValueTranslations;
+    return java.util.Collections.unmodifiableMap(super.keyValueTranslations);
 }
```

Otherwise, the current mapping annotations look fine.
♻️ Duplicate comments (3)
nae-user-ce/pom.xml (1)
9-9: Version downgrade from RC9 to RC8.2.

This module also exhibits the parent version downgrade from `7.0.0-RC9` to `7.0.0-RC8.2`. Please see the verification comment on `nae-object-library/pom.xml` for the consolidated analysis of this version change across all modules.

pom.xml (1)
9-9: Root parent POM version downgraded from RC9 to RC8.2.

This is the parent POM that propagates the version to all child modules. The version downgrade requires verification as noted in the `nae-object-library/pom.xml` review comment.

application-engine/pom.xml (1)
9-9: Version downgrade from RC9 to RC8.2.

This module also exhibits the parent version downgrade. Please see the verification comment on `nae-object-library/pom.xml` for the consolidated analysis.
📜 Review details
Configuration used: CodeRabbit UI
Review profile: ASSERTIVE
Plan: Pro
📒 Files selected for processing (36)
- Dockerfile (1 hunks)
- application-engine/pom.xml (1 hunks)
- application-engine/src/main/groovy/com/netgrif/application/engine/petrinet/domain/dataset/logic/action/ActionDelegate.groovy (1 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/configuration/AbstractSecurityConfiguration.java (2 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/configuration/ElasticServiceConfiguration.java (3 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/configuration/SessionConfiguration.java (3 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/configuration/SessionConfigurationStaticEnabled.java (1 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/configuration/properties/DataConfigurationProperties.java (9 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/configuration/properties/ImpersonationConfigurationProperties.java (1 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/elastic/domain/BulkOperationWrapper.java (1 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseMappingService.java (4 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseService.java (4 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticIndexService.java (2 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticQueueManager.java (1 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/interfaces/IElasticCaseService.java (0 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/manager/service/SessionManagerService.java (1 hunks)
- application-engine/src/main/java/com/netgrif/application/engine/workflow/service/CaseEventHandler.java (1 hunks)
- application-engine/src/main/resources/application.yaml (1 hunks)
- application-engine/src/test/resources/application-test.yaml (1 hunks)
- nae-object-library/pom.xml (1 hunks)
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/annotations/Indexable.java (1 hunks)
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/ElasticTask.java (2 hunks)
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/I18nStringUtils.java (1 hunks)
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/MapField.java (1 hunks)
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/petrinet/domain/PetriNet.java (3 hunks)
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/workflow/domain/Case.java (4 hunks)
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/workflow/domain/Task.java (2 hunks)
- nae-spring-core-adapter/pom.xml (1 hunks)
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/configuration/AbstractMongoIndexesConfigurator.java (2 hunks)
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticCase.java (1 hunks)
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticPetriNet.java (1 hunks)
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticTask.java (1 hunks)
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/MapField.java (2 hunks)
- nae-user-ce/pom.xml (1 hunks)
- nae-user-common/pom.xml (1 hunks)
- pom.xml (1 hunks)
💤 Files with no reviewable changes (1)
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/interfaces/IElasticCaseService.java
🧰 Additional context used
🧠 Learnings (12)
📓 Common learnings
Learnt from: machacjozef
Repo: netgrif/application-engine PR: 367
File: application-engine/src/main/resources/application.yaml:24-24
Timestamp: 2025-10-20T11:44:44.907Z
Learning: In the netgrif/application-engine project, the correction of the Elasticsearch task index name from "_taks" to "_task" in application.yaml was approved by maintainer machacjozef, indicating that any data migration concerns for this typo fix are handled separately or not applicable to their deployment scenario.
📚 Learning: 2025-11-04T11:47:03.897Z
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 378
File: nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticCase.java:16-16
Timestamp: 2025-11-04T11:47:03.897Z
Learning: In the netgrif/application-engine project, ElasticsearchRunner is responsible for creating Elasticsearch indexes automatically based on configuration. Spring Data's automatic index creation via the Document annotation's createIndex attribute should be disabled (set to false) to prevent conflicts with this centralized index management approach.
Applied to files:
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticTask.java
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticCase.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticIndexService.java
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticPetriNet.java
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/configuration/AbstractMongoIndexesConfigurator.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseService.java
- application-engine/src/main/java/com/netgrif/application/engine/configuration/ElasticServiceConfiguration.java
📚 Learning: 2025-10-20T11:44:44.907Z
Learnt from: machacjozef
Repo: netgrif/application-engine PR: 367
File: application-engine/src/main/resources/application.yaml:24-24
Timestamp: 2025-10-20T11:44:44.907Z
Learning: In the netgrif/application-engine project, the correction of the Elasticsearch task index name from "_taks" to "_task" in application.yaml was approved by maintainer machacjozef, indicating that any data migration concerns for this typo fix are handled separately or not applicable to their deployment scenario.
Applied to files:
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticTask.java
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticCase.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticIndexService.java
- application-engine/src/main/resources/application.yaml
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/workflow/domain/Task.java
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticPetriNet.java
- application-engine/src/test/resources/application-test.yaml
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/ElasticTask.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseService.java
- application-engine/src/main/java/com/netgrif/application/engine/configuration/ElasticServiceConfiguration.java
📚 Learning: 2025-08-19T20:07:15.621Z
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 339
File: application-engine/src/test/groovy/com/netgrif/application/engine/filters/FilterImportExportTest.groovy:341-341
Timestamp: 2025-08-19T20:07:15.621Z
Learning: In the Elasticsearch migration, the stringId property was removed from ElasticCase and ElasticTask, with the id property now containing the MongoDB case/task object ID directly. This consolidation eliminates redundancy and simplifies the data model.
Applied to files:
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticTask.java
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticCase.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticIndexService.java
- application-engine/src/main/java/com/netgrif/application/engine/workflow/service/CaseEventHandler.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseMappingService.java
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/ElasticTask.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseService.java
📚 Learning: 2025-07-31T23:40:46.499Z
Learnt from: tuplle
Repo: netgrif/application-engine PR: 334
File: application-engine/src/main/java/com/netgrif/application/engine/petrinet/service/PetriNetService.java:204-214
Timestamp: 2025-07-31T23:40:46.499Z
Learning: In the PetriNetService.importPetriNet method, existingNet.getVersion() cannot be null because all existing nets in the system were deployed through processes that ensure every net always has a version assigned.
Applied to files:
nae-object-library/src/main/java/com/netgrif/application/engine/objects/petrinet/domain/PetriNet.java
📚 Learning: 2025-09-29T10:31:57.325Z
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 362
File: application-engine/src/main/java/com/netgrif/application/engine/petrinet/service/ProcessRoleService.java:513-529
Timestamp: 2025-09-29T10:31:57.325Z
Learning: PetriNet.getStringId() returns a simple ObjectId string representation (_id.toString()), not a composite Netgrif ID format, so new ObjectId(petriNetId) works correctly when petriNetId comes from PetriNet.getStringId().
Applied to files:
nae-object-library/src/main/java/com/netgrif/application/engine/objects/petrinet/domain/PetriNet.java
📚 Learning: 2025-08-19T20:13:40.087Z
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 339
File: nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/CaseField.java:16-16
Timestamp: 2025-08-19T20:13:40.087Z
Learning: In CaseField.java, fulltextValue is mapped as a keyword field type in Elasticsearch (for exact matches, filtering, aggregations), while the separate caseValue field serves different Elasticsearch query requirements, allowing the system to support multiple query patterns on the same data through different field mappings.
Applied to files:
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticCase.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseMappingService.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseService.java
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/MapField.java
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/MapField.java
📚 Learning: 2025-08-19T20:07:43.748Z
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 339
File: nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/CaseField.java:16-16
Timestamp: 2025-08-19T20:07:43.748Z
Learning: In CaseField.java, the separate caseValue field (List<String>) is intentionally maintained alongside fulltextValue for specific Elasticsearch query requirements, rather than being derived on-the-fly from fulltextValue.
Applied to files:
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticCase.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseMappingService.java
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/workflow/domain/Case.java
- nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/MapField.java
- nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/MapField.java
📚 Learning: 2025-11-10T15:28:13.391Z
Learnt from: tuplle
Repo: netgrif/application-engine PR: 377
File: application-engine/src/main/resources/application.yaml:26-27
Timestamp: 2025-11-10T15:28:13.391Z
Learning: In the netgrif/application-engine project, all engine deployments within the same cluster intentionally share the same Redis session namespace. The change from database-specific namespace to a shared "netgrif" default (configurable via NETGRIF_ENGINE_DATA_REDIS_SESSION_NAMESPACE) is by design to enable session sharing across deployments.
Applied to files:
- application-engine/src/main/java/com/netgrif/application/engine/configuration/SessionConfigurationStaticEnabled.java
- application-engine/src/main/resources/application.yaml
- application-engine/src/main/java/com/netgrif/application/engine/configuration/AbstractSecurityConfiguration.java
- application-engine/src/main/java/com/netgrif/application/engine/configuration/properties/ImpersonationConfigurationProperties.java
- application-engine/src/main/java/com/netgrif/application/engine/configuration/SessionConfiguration.java
- application-engine/src/main/java/com/netgrif/application/engine/manager/service/SessionManagerService.java
- application-engine/src/main/java/com/netgrif/application/engine/configuration/properties/DataConfigurationProperties.java
📚 Learning: 2025-11-07T13:11:20.622Z
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 378
File: application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticQueueManager.java:145-149
Timestamp: 2025-11-07T13:11:20.622Z
Learning: In the netgrif/application-engine project, failed Elasticsearch bulk operations in ElasticQueueManager should not be retried or re-queued within the flush() method. The system relies on scheduled reindexing and manual reindex endpoints as fallback mechanisms for failed operations, making in-method retries unnecessary and potentially problematic (could cause deadlocks, capacity issues, or infinite loops with bad data).
Applied to files:
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticIndexService.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticQueueManager.java
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseService.java
📚 Learning: 2025-08-20T07:27:02.660Z
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 339
File: application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticTaskQueueManager.java:38-45
Timestamp: 2025-08-20T07:27:02.660Z
Learning: When reviewing ElasticTaskQueueManager changes, task.getTask().getId() returns the document identifier while task.getTaskId() returns the business task identifier. The queue operations should use consistent identifiers throughout the lifecycle (scheduling, processing, cleanup).
Applied to files:
nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/ElasticTask.java
📚 Learning: 2025-07-29T17:19:18.300Z
Learnt from: tuplle
Repo: netgrif/application-engine PR: 331
File: application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticPetriNetService.java:45-46
Timestamp: 2025-07-29T17:19:18.300Z
Learning: In ElasticPetriNetService class, petriNetService is properly initialized using Lazy setter injection rather than constructor injection. This pattern with Lazy Autowired setter methods is commonly used in Spring to resolve circular dependencies and is a valid alternative to constructor injection.
Applied to files:
- application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseService.java
- application-engine/src/main/java/com/netgrif/application/engine/configuration/ElasticServiceConfiguration.java
🧬 Code graph analysis (7)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/annotations/Indexable.java (1)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/petrinet/domain/PetriNet.java (1)
Indexable(29-441)
application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseMappingService.java (1)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/I18nStringUtils.java (1)
I18nStringUtils(8-18)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/workflow/domain/Case.java (1)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/petrinet/domain/PetriNet.java (1)
Indexable(29-441)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/workflow/domain/Task.java (1)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/petrinet/domain/PetriNet.java (1)
Indexable(29-441)
application-engine/src/main/java/com/netgrif/application/engine/configuration/SessionConfiguration.java (2)
application-engine/src/main/java/com/netgrif/application/engine/manager/service/SessionManagerService.java (1)
Slf4j (20-71)
application-engine/src/main/java/com/netgrif/application/engine/configuration/SessionConfigurationStaticEnabled.java (1)
Configuration (11-24)
nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/configuration/AbstractMongoIndexesConfigurator.java (1)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/petrinet/domain/PetriNet.java (1)
Indexable(29-441)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/MapField.java (4)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/DataField.java (1)
Data (10-29)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/I18nField.java (1)
Data (11-34)
nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/MapField.java (1)
NoArgsConstructor (13-39)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/I18nStringUtils.java (1)
I18nStringUtils (8-18)
🪛 Checkov (3.2.334)
Dockerfile
[low] 1-17: Ensure that HEALTHCHECK instructions have been added to container images
(CKV_DOCKER_2)
[low] 1-17: Ensure that a user for the container has been created
(CKV_DOCKER_3)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (5)
- GitHub Check: task-list-completed
- GitHub Check: task-list-completed
- GitHub Check: task-list-completed
- GitHub Check: task-list-completed
- GitHub Check: Build
🔇 Additional comments (27)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/ElasticTask.java (1)
11-26: Serializable integration looks fine; confirm serializability of all fields and subclasses.

Making `ElasticTask` implement `Serializable` with an explicit `serialVersionUID` and `@Serial` is correct and consistent with Java 21 usage. The only concern is runtime: all non-transient fields (e.g., `I18nString`, and any additional fields in concrete subclasses) must themselves be `Serializable` or handled specially, otherwise Java serialization will fail with `NotSerializableException`.

Please verify:

- `I18nString` (and any other custom types reachable from these fields) implements `Serializable`, or
- You intend to never use Java serialization over these specific fields, in which case consider marking non-serializable ones as `transient`.

If subclasses introduce new fields, they should be checked for the same constraint and may warrant their own `serialVersionUID` if long-term compatibility is required.
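As a hedged illustration of the constraint described above (not the actual `ElasticTask` code), the pattern being verified looks roughly like this:

```java
import java.io.Serial;
import java.io.Serializable;

// Stand-in for the project's I18nString; the real class must likewise
// implement Serializable for Java serialization of ElasticTask to work.
record I18nString(String defaultValue) implements Serializable {
}

public class SerializableTaskSketch implements Serializable {

    @Serial
    private static final long serialVersionUID = 1L; // explicit UID for long-term compatibility

    // Safe only because the field's type is Serializable; otherwise
    // serialization fails at runtime with NotSerializableException.
    private I18nString title;

    // A field whose type cannot implement Serializable should be excluded explicitly.
    private transient Object nonSerializableHelper; // illustrative placeholder
}
```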
application-engine/src/main/java/com/netgrif/application/engine/configuration/properties/ImpersonationConfigurationProperties.java (1)

22-23: Nested property path is correctly configured; no changes needed.

Verification confirms the SpEL expression `#{redisProperties.session.namespace}` resolves to a valid property chain. The `redisProperties` bean provides access to `session.namespace` through the `EngineRedisSessionProperties` class, which inherits the `namespace` property from Spring Boot's `RedisSessionProperties`. This property path is already actively used in the codebase (e.g., `DataConfigurationProperties.java` lines 111–113), confirming the configuration is properly initialized and functional.
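A minimal illustration of the SpEL binding verified above; the class and field names are shortened stand-ins for the actual `ImpersonationConfigurationProperties` member:

```java
import org.springframework.beans.factory.annotation.Value;
import org.springframework.stereotype.Component;

@Component
public class ImpersonationNamespaceSketch {

    // SpEL resolves the redisProperties bean, then session.getNamespace() on it.
    @Value("#{redisProperties.session.namespace}")
    private String namespace;
}
```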
Dockerfile (1)

1-1: LGTM! Base image upgrade to Eclipse Temurin.

Eclipse Temurin is the official OpenJDK distribution from the Eclipse Foundation and provides better long-term support and regular security updates compared to the deprecated `openjdk` images.

nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticTask.java (1)
19-19: LGTM! Disables automatic index creation as per the project pattern.

Setting `createIndex = false` prevents Spring Data Elasticsearch from automatically creating indexes, ensuring centralized index management through ElasticsearchRunner. This change is consistent with the established pattern for ElasticCase and ElasticPetriNet.

Based on learnings.
nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticPetriNet.java (1)
20-20: LGTM! Consistent with the centralized index management pattern.

The addition of `createIndex = false` aligns with the project's approach of managing Elasticsearch indexes centrally through ElasticsearchRunner, preventing conflicts with Spring Data's automatic index creation.

Based on learnings.
nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/elastic/domain/ElasticCase.java (1)
16-16: LGTM! Disables automatic index creation.

This change is consistent with the centralized index management pattern used across all Elasticsearch domain classes in the project.
Based on learnings.
nae-object-library/pom.xml (1)
10-10: Disregard this review comment; it is based on an incorrect premise.

The review comment assumes the version downgrade occurs within this PR, but verification shows no POM files are modified in the current PR. All `pom.xml` files are unchanged and already set to version `7.0.0-RC8.2` from prior commits. The git history shows a version progression (RC9 → RC8.1 → RC8.2) in earlier commits, but the current PR contains no version changes.

Likely an incorrect or invalid review comment.
nae-user-common/pom.xml (1)
9-9: Verify the parent POM version downgrade is intentional.

Line 9 shows a downgrade from 7.0.0-RC9 to 7.0.0-RC8.2. This is atypical for a bugfix PR and may indicate a merge conflict resolution or a revert. Given that the PR objective involves adding MongoDB indexes, confirm that:
- The parent POM RC8.2 contains the necessary changes for NAE-2267 (indexing annotations, configuration, etc.)
- This downgrade is intentional and not accidental (RC9 > RC8.2)
- All dependent modules using the parent consistently target RC8.2
nae-spring-core-adapter/pom.xml (1)
10-10: Verify the parent POM version downgrade is consistent and intentional.

Line 10 shows the same downgrade pattern (7.0.0-RC9 → 7.0.0-RC8.2) as seen in nae-user-common/pom.xml. This coordinated downgrade across modules is atypical. Confirm that:
- The parent POM RC8.2 contains all indexing-related changes required by NAE-2267
- This is a deliberate update, not an accidental revert
- All modules are consistently aligned to RC8.2
nae-object-library/src/main/java/com/netgrif/application/engine/objects/workflow/domain/Task.java (1)
24-31: Marking `Task` as `@Indexable` aligns it with central index configuration.

Adding `@Indexable` here is consistent with the new resolver filter and will allow your Mongo index configurator to pick up Task's custom `@Indexed` fields. No issues from the class side.

nae-object-library/src/main/java/com/netgrif/application/engine/objects/workflow/domain/Case.java (1)
25-56: Case is now indexable, with indexes on `visualId` and `processIdentifier`.

Annotating `Case` with `@Indexable` and adding custom `@Indexed` on `visualId` and `processIdentifier` looks consistent with the new index configurator and should address missing Mongo indexes for common lookup fields. Assuming queries frequently filter on these, this is a solid change; no issues spotted here.

nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/configuration/AbstractMongoIndexesConfigurator.java (1)
37-42: No coverage gap detected – verify passed.

Verification confirms all three entities using custom `@Indexed` annotations (PetriNet, Case, Task) are already marked `@Indexable`. Additionally, indexes configured via `getIndexes()` are applied independently of the filter (in `addConfiguredFields()` at lines 71–72), so no coverage gap exists for those either.

However, the annotation definition shows `@Indexable` is not marked `@Inherited`:

```java
public @interface Indexable {
}
```
@Indexablebut a child is not, the child won't inherit the annotation—confirming the inheritance risk raised in the review. Consider marking@Indexablewith@Inheritedto prevent this in the future, though no current entities exploit this scenario.nae-object-library/src/main/java/com/netgrif/application/engine/objects/petrinet/domain/PetriNet.java (2)
nae-object-library/src/main/java/com/netgrif/application/engine/objects/petrinet/domain/PetriNet.java (2)

3-4: LGTM: Indexing annotation imports added correctly.

The imports for the `@Indexable` and `@Indexed` annotations are correctly added to support the indexing functionality introduced in this PR.
29-29: Class-level indexing annotation added appropriately.

The `@Indexable` annotation correctly marks the PetriNet class for indexing support.

application-engine/src/main/java/com/netgrif/application/engine/configuration/properties/DataConfigurationProperties.java (3)
108-116: Verify the Redis session namespace fallback logic against actual key namespace usage.

You now special-case `redis.getSession().getNamespace()` to rewrite only when null/blank/`"spring:session"`, and set it to `"spring:session:" + databaseName`. Depending on how the `RedisIndexedSessionRepository` key namespace is constructed elsewhere, this could produce a double `spring:session` prefix or a namespace that diverges from the intended shared `NETGRIF_ENGINE_DATA_REDIS_SESSION_NAMESPACE` behavior.

Please double-check that:

- the value stored in `EngineRedisSessionProperties.namespace` matches what `SessionConfiguration` (and any other consumers) expect, and
- the computed prefix used in `SessionManagerService` is still aligned with the actual Redis key pattern.
277-282: Check the impact of making the Mongo `mode` nullable.

`MongoProperties.mode` is now declared without a default (`private ClusterConnectionMode mode;`). Any code that previously assumed a non-null default (e.g., `SINGLE`) and passes this directly into driver settings may now see `null` and either NPE or silently fall back to driver defaults.

Please verify usages of `MongoProperties.getMode()` and ensure they either:

- handle `null` explicitly (e.g., a conditional call to the builder, as sketched below), or
- reintroduce a safe default where required.
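A minimal sketch of the null-safe option, assuming the mode is applied to MongoDB driver settings somewhere like this; the helper and its call site are hypothetical:

```java
import com.mongodb.MongoClientSettings;
import com.mongodb.connection.ClusterConnectionMode;

class MongoModeSketch {

    // Apply the configured mode only when present; otherwise keep the
    // driver's own default instead of passing null into the builder.
    static MongoClientSettings.Builder applyMode(MongoClientSettings.Builder builder,
                                                 ClusterConnectionMode mode) {
        if (mode != null) {
            builder.applyToClusterSettings(cluster -> cluster.mode(mode));
        }
        return builder;
    }
}
```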
715-844: The RedisProperties refactor into nested Sentinel + session props looks correct.

The split into `RedisSentinelProperties` and `EngineRedisSessionProperties` (extending `RedisSessionProperties`) with dedicated `session` and `sentinel` fields is clear and matches the YAML structure (`redis.session.namespace`, sentinel nodes, auth, etc.). Defaults for SSL, sentinel nodes, and session limiting/filtering are conservative and safe.

No functional issues spotted here; just ensure documentation/examples use the new nested properties.
application-engine/src/test/resources/application-test.yaml (1)
10-11: Test queue delay override is appropriate.

Setting `netgrif.engine.data.elasticsearch.queue.delay: 10` for tests is consistent with the new queue configuration and should help tests execute faster without impacting production defaults.

application-engine/src/main/resources/application.yaml (1)
25-29: Redis session namespace nesting matches the intended shared-namespace design.

Moving to `redis.session.namespace: ${NETGRIF_ENGINE_DATA_REDIS_SESSION_NAMESPACE:netgrif}` under `netgrif.engine.data.redis` aligns with the nested `EngineRedisSessionProperties` and the previously agreed shared session namespace across deployments.

Looks good; just ensure the migration docs mention the new property path and env variable.

application-engine/src/main/java/com/netgrif/application/engine/manager/service/SessionManagerService.java (1)
application-engine/src/main/java/com/netgrif/application/engine/manager/service/SessionManagerService.java (1)
28-32: EnsureredisUsernameKeystill matches actual Redis principal index key patternAfter moving the namespace under
redisProperties.getSession(),redisUsernameKeyis now built as:
DEFAULT_NAMESPACE + ":" + redisProperties.getSession().getNamespace() + ":index:...".Please verify this still matches the prefix actually used by
RedisIndexedSessionRepositoryfor the principal‑name index, especially in combination with the namespace initialization inDataConfigurationProperties.redisProperties()and any session configuration class. A mismatch here would makegetAllLoggedUsers()silently return no users even when sessions exist.application-engine/src/main/java/com/netgrif/application/engine/configuration/AbstractSecurityConfiguration.java (1)
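For comparison, a hedged sketch of the key shape Spring Session's indexed repository uses for the principal-name index; the `username` parameter is illustrative:

```java
import org.springframework.session.FindByIndexNameSessionRepository;

class PrincipalIndexKeySketch {

    // Assumed layout of RedisIndexedSessionRepository's principal-name index key:
    // "<namespace>:index:<PRINCIPAL_NAME_INDEX_NAME>:<principal>"
    static String principalIndexKey(String namespace, String username) {
        return namespace + ":index:"
                + FindByIndexNameSessionRepository.PRINCIPAL_NAME_INDEX_NAME
                + ":" + username;
    }
}
```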
application-engine/src/main/java/com/netgrif/application/engine/configuration/AbstractSecurityConfiguration.java (1)

222-235: Session and filter configuration correctly switched to nested Redis session properties.

Using `redisProperties.getSession().isEnabledLimitSession()` / `getMaxSession()` / `isEnabledFilter()` is consistent with the new `EngineRedisSessionProperties` structure and should preserve existing behavior with the new configuration layout.

No issues spotted here.

application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticIndexService.java (1)
application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticIndexService.java (1)
448-458: UsingelasticsearchConverter.mapObject(doc)for bulk updates is a sound fixSwitching the bulk update payload to
elasticsearchTemplate.getElasticsearchConverter().mapObject(doc)for both cases and tasks should align the serialized document structure with Spring Data Elasticsearch’s mapping expectations and the new Java client, while still usingdoc.getId()anddocAsUpsert(true)as before.Please verify via tests that the resulting JSON matches the index mapping (no missing fields or unexpected structure changes).
Also applies to: 470-480
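A hedged sketch of that bulk-update construction with the Elasticsearch Java client; the surrounding service wiring is assumed, and `indexName` is an illustrative parameter rather than the actual `ElasticIndexService` code:

```java
import co.elastic.clients.elasticsearch.core.bulk.BulkOperation;
import org.springframework.data.elasticsearch.core.ElasticsearchTemplate;
import org.springframework.data.elasticsearch.core.document.Document;

class BulkUpdateSketch {

    // Builds one upserting bulk update for a single entity, using the
    // converter-mapped document as the payload (as the fix above describes).
    static BulkOperation updateOperation(ElasticsearchTemplate template,
                                         String indexName, String id, Object entity) {
        Document mapped = template.getElasticsearchConverter().mapObject(entity);
        return BulkOperation.of(b -> b
                .update(u -> u
                        .index(indexName)           // target index
                        .id(id)                     // document identifier
                        .action(a -> a
                                .doc(mapped)        // mapped payload
                                .docAsUpsert(true)) // insert if missing, update otherwise
                ));
    }
}
```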
application-engine/src/main/java/com/netgrif/application/engine/configuration/SessionConfigurationStaticEnabled.java (1)
11-16: Redis session namespace placeholder is consistent with the shared session configuration.

Using `spring:session:${netgrif.engine.data.redis.session.namespace}` aligns this static configuration with the shared, configurable Redis namespace used elsewhere in the engine and keeps session namespace handling centralized.

application-engine/src/main/java/com/netgrif/application/engine/configuration/ElasticServiceConfiguration.java (3)
23-40: Constructor-based wiring of ElasticCaseService and related dependencies looks sound.

Moving to `@RequiredArgsConstructor` with `final` fields and passing all required collaborators (`ElasticCaseRepository`, `ElasticsearchTemplate`, the appropriate `Executor`, `ElasticsearchProperties`, `IPetriNetService`, `IWorkflowService`, `IElasticCasePrioritySearch`, `ApplicationEventPublisher`, `ElasticsearchClient`) into `ElasticCaseService` is consistent and much clearer than field injection.

The primary and reindexing case services differ only by executor, which matches the intent of using separate thread pools for normal vs. reindex operations.

Just keep the `ElasticCaseService` constructor signature in sync with this configuration when evolving it, as any mismatch will only surface at compile time.

Also applies to: 57-71, 79-91
nae-object-library/src/main/java/com/netgrif/application/engine/objects/elastic/domain/MapField.java (1)
18-34: Translation-aware map constructor LGTM.

The constructor cleanly captures keys plus I18n translations and keeps the text/fulltext arrays in sync. Looks solid.
application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseMappingService.java (1)
96-139: Nice reuse of I18nString utilities.

Switching multichoice/enum mapping to I18nString plus I18nStringUtils keeps translation handling centralized and null-safe. Good refactor.
application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticCaseService.java (1)
98-122: Queue integration and shutdown wiring look good.

Wiring the dedicated index/delete queues and stopping them in `@PreDestroy` keeps the service tidy. Nice work.
```java
/**
 * Configuration properties for handling queues in Elasticsearch operations.
 * These properties specify the behavior of the ElasticQueueManager,
 * including the maximum queue size, delay between flush operations,
 * and the thread pool size for scheduled executor service tasks.
 */
@Valid
private QueueProperties queue = new QueueProperties();
```
Elasticsearch queue properties look consistent; fix the Javadoc type reference.

The new queue configuration and QueueProperties (pool size, delay, maxBatchSize, maxQueueSize, `Refresh refreshPolicy`) are coherent and provide sensible defaults. However, the Javadoc for `refreshPolicy` still references `{@link RefreshPolicy#NONE}` while the field type is `co.elastic.clients.elasticsearch._types.Refresh` with default `Refresh.False`.

Recommend updating the Javadoc to reference the correct type/constant (e.g., `Refresh.False`) to avoid confusion.

Also applies to: 657-711
There is no such change in the code.
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Check the QueueProperties class definition to see the refreshPolicy field and its Javadoc
sed -n '657,711p' application-engine/src/main/java/com/netgrif/application/engine/configuration/properties/DataConfigurationProperties.java
```
Length of output: 2229
@Retoocs, the issue CodeRabbit is pointing out exists in the newly added QueueProperties class at line 709. Here's the inconsistency:
Current Javadoc (lines 707–710):

```java
/**
 * Defines the refresh policy for Elasticsearch operations.
 * Determines when changes made by bulk operations will be visible for search.
 * Default value is {@link RefreshPolicy#NONE}, meaning no immediate refresh.
 */
private Refresh refreshPolicy = Refresh.False;
```

The issue:
private Refresh refreshPolicy = Refresh.False;The Issue:
- The Javadoc references
{@link RefreshPolicy#NONE}(which is fromorg.springframework.data.elasticsearch.core.RefreshPolicy) - But the actual field type is
Refresh(fromco.elastic.clients.elasticsearch._types.Refresh) with default valueRefresh.False
Suggested fix:

Update line 709 to reference the correct type and constant:

```java
 * Default value is {@link Refresh#False}, meaning no immediate refresh.
```

This ensures the documentation matches the actual code implementation and won't confuse developers reading the Javadoc.
🧬 Code Graph Analysis Results
[file: application-engine/src/main/java/com/netgrif/application/engine/configuration/properties/DataConfigurationProperties.java]
- MongoProperties bean method
  - Context: Provides a MongoProperties bean and conditionally sets defaults based on the DataConfigurationProperties state.
  - Description: If mongodb.drop is null, it adopts the top-level drop value. If mongodb.database is null, it uses the top-level databaseName. Returns the configured MongoProperties instance.
  - Content (complete as in file):

```java
@Bean
@Primary
public MongoProperties mongoProperties() {
    if (mongodb.getDrop() == null) {
        mongodb.setDrop(drop);
    }
    if (mongodb.getDatabase() == null) {
        mongodb.setDatabase(databaseName);
    }
    return mongodb;
}
```
- ElasticsearchProperties bean method
  - Context: Provides an ElasticsearchProperties bean and fills in defaults for drop and index if not set.
  - Description: If elasticsearch.drop is null, applies the top-level drop. If elasticsearch.index is null or empty, populates a default map of indices for petrinet, case, and task using the databaseName prefix.
  - Content (complete as in file):

```java
@Bean
public ElasticsearchProperties elasticsearchProperties() {
    if (elasticsearch.getDrop() == null) {
        elasticsearch.setDrop(drop);
    }
    if (elasticsearch.getIndex() == null || elasticsearch.getIndex().isEmpty()) {
        elasticsearch.setIndex(Map.of(
                ElasticsearchProperties.PETRI_NET_INDEX, databaseName + "_petrinet",
                ElasticsearchProperties.CASE_INDEX, databaseName + "_case",
                ElasticsearchProperties.TASK_INDEX, databaseName + "_task"
        ));
    }
    return elasticsearch;
}
```
- RedisProperties bean method
  - Context: Provides a RedisProperties bean and ensures a scoped namespace for Redis sessions.
  - Description: If the Redis session namespace is null/blank or the default "spring:session", it prefixes it with the databaseName to form "spring:session:" + databaseName.
  - Content (complete as in file):

```java
@Bean
@Primary
public RedisProperties redisProperties() {
    String namespace = redis.getSession().getNamespace();
    if (namespace == null || namespace.isBlank() || "spring:session".equals(namespace)) {
        redis.getSession().setNamespace("spring:session:" + databaseName);
    }
    return redis;
}
```
🧠 Learnings used
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 378
File: application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticQueueManager.java:145-149
Timestamp: 2025-11-07T13:11:20.622Z
Learning: In the netgrif/application-engine project, failed Elasticsearch bulk operations in ElasticQueueManager should not be retried or re-queued within the flush() method. The system relies on scheduled reindexing and manual reindex endpoints as fallback mechanisms for failed operations, making in-method retries unnecessary and potentially problematic (could cause deadlocks, capacity issues, or infinite loops with bad data).
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 339
File: application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticTaskQueueManager.java:38-45
Timestamp: 2025-08-20T07:27:02.660Z
Learning: When reviewing ElasticTaskQueueManager changes, task.getTask().getId() returns the document identifier while task.getTaskId() returns the business task identifier. The queue operations should use consistent identifiers throughout the lifecycle (scheduling, processing, cleanup).
Learnt from: tuplle
Repo: netgrif/application-engine PR: 377
File: application-engine/src/main/resources/application.yaml:26-27
Timestamp: 2025-11-10T15:28:13.391Z
Learning: In the netgrif/application-engine project, all engine deployments within the same cluster intentionally share the same Redis session namespace. The change from database-specific namespace to a shared "netgrif" default (configurable via NETGRIF_ENGINE_DATA_REDIS_SESSION_NAMESPACE) is by design to enable session sharing across deployments.
Learnt from: Retoocs
Repo: netgrif/application-engine PR: 383
File: application-engine/src/main/java/com/netgrif/application/engine/startup/ApplicationRunnerOrderResolver.java:43-43
Timestamp: 2025-11-14T10:22:01.634Z
Learning: For the netgrif/application-engine repository, avoid flagging trivial or nitpick-level issues such as redundant null checks, minor code style improvements, or obvious simplifications that don't affect functionality or introduce bugs. Focus review comments on substantive issues like logic errors, security concerns, performance problems, or breaking changes.
Learnt from: machacjozef
Repo: netgrif/application-engine PR: 367
File: application-engine/src/main/resources/application.yaml:24-24
Timestamp: 2025-10-20T11:44:44.907Z
Learning: In the netgrif/application-engine project, the correction of the Elasticsearch task index name from "_taks" to "_task" in application.yaml was approved by maintainer machacjozef, indicating that any data migration concerns for this typo fix are handled separately or not applicable to their deployment scenario.
```diff
         if (hasCredentials(redisProperties.getUsername(), redisProperties.getPassword())) {
             redisStandaloneConfiguration.setUsername(redisProperties.getUsername());
             redisStandaloneConfiguration.setPassword(redisProperties.getPassword());
         }
-        return new JedisConnectionFactory(redisStandaloneConfiguration);
+        JedisClientConfiguration clientConfiguration = jedisClientConfiguration();
+        return new JedisConnectionFactory(redisStandaloneConfiguration, clientConfiguration);
     }

     protected JedisConnectionFactory redisSentinelConfiguration() {
         RedisSentinelConfiguration sentinelConfiguration = new RedisSentinelConfiguration();
         sentinelConfiguration.setMaster(redisProperties.getSentinel().getMaster());
         List<RedisNode> nodes = redisProperties.getSentinel().getNodes().stream().map(node -> {
             try {
                 return RedisNode.fromString(node);
             } catch (Exception e) {
                 log.warn("Parsing redis sentinel node {} has failed. Trying to use the value as an address without port and adding default sentinel port {}", node, RedisNode.DEFAULT_SENTINEL_PORT, e);
                 return new RedisNode(node, RedisNode.DEFAULT_SENTINEL_PORT);
             }
         }).toList();
         sentinelConfiguration.setSentinels(nodes);

         if (hasCredentials(redisProperties.getUsername(), redisProperties.getPassword())) {
             sentinelConfiguration.setUsername(redisProperties.getUsername());
             sentinelConfiguration.setPassword(redisProperties.getPassword());
         }
         if (hasCredentials(redisProperties.getSentinel().getUsername(), redisProperties.getSentinel().getPassword())) {
             sentinelConfiguration.setSentinelUsername(redisProperties.getSentinel().getUsername());
             sentinelConfiguration.setSentinelPassword(redisProperties.getSentinel().getPassword());
         }

         JedisClientConfiguration clientConfiguration = jedisClientConfiguration();
         return new JedisConnectionFactory(sentinelConfiguration, clientConfiguration);
     }

     protected JedisClientConfiguration jedisClientConfiguration() {
         if (redisProperties.isSsl()) {
             return JedisClientConfiguration.builder().useSsl().build();
         }
         return JedisClientConfiguration.defaultConfiguration();
     }

     private boolean hasCredentials(String username, String password) {
         return username != null && !username.isBlank() &&
                password != null && !password.isBlank();
     }
```
Do not require a username to send Redis passwords.

`hasCredentials` forces both username and password to be non-blank before we call `setPassword(...)`. Many deployments (including the default Redis user) supply only a password. After this change their password is silently dropped and every connection will fail with NOAUTH, which is a hard regression.

Set username and password independently so password-only auth keeps working (the same applies to sentinel credentials).
```diff
-    if (hasCredentials(redisProperties.getUsername(), redisProperties.getPassword())) {
-        redisStandaloneConfiguration.setUsername(redisProperties.getUsername());
-        redisStandaloneConfiguration.setPassword(redisProperties.getPassword());
-    }
+    if (hasText(redisProperties.getUsername())) {
+        redisStandaloneConfiguration.setUsername(redisProperties.getUsername());
+    }
+    if (hasText(redisProperties.getPassword())) {
+        redisStandaloneConfiguration.setPassword(redisProperties.getPassword());
+    }
@@
-    if (hasCredentials(redisProperties.getUsername(), redisProperties.getPassword())) {
-        sentinelConfiguration.setUsername(redisProperties.getUsername());
-        sentinelConfiguration.setPassword(redisProperties.getPassword());
-    }
-    if (hasCredentials(redisProperties.getSentinel().getUsername(), redisProperties.getSentinel().getPassword())) {
-        sentinelConfiguration.setSentinelUsername(redisProperties.getSentinel().getUsername());
-        sentinelConfiguration.setSentinelPassword(redisProperties.getSentinel().getPassword());
-    }
+    if (hasText(redisProperties.getUsername())) {
+        sentinelConfiguration.setUsername(redisProperties.getUsername());
+    }
+    if (hasText(redisProperties.getPassword())) {
+        sentinelConfiguration.setPassword(redisProperties.getPassword());
+    }
+    if (hasText(redisProperties.getSentinel().getUsername())) {
+        sentinelConfiguration.setSentinelUsername(redisProperties.getSentinel().getUsername());
+    }
+    if (hasText(redisProperties.getSentinel().getPassword())) {
+        sentinelConfiguration.setSentinelPassword(redisProperties.getSentinel().getPassword());
+    }
@@
-    private boolean hasCredentials(String username, String password) {
-        return username != null && !username.isBlank() &&
-               password != null && !password.isBlank();
+    private boolean hasText(String value) {
+        return value != null && !value.isBlank();
     }
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```java
        if (hasText(redisProperties.getUsername())) {
            redisStandaloneConfiguration.setUsername(redisProperties.getUsername());
        }
        if (hasText(redisProperties.getPassword())) {
            redisStandaloneConfiguration.setPassword(redisProperties.getPassword());
        }
        JedisClientConfiguration clientConfiguration = jedisClientConfiguration();
        return new JedisConnectionFactory(redisStandaloneConfiguration, clientConfiguration);
    }

    protected JedisConnectionFactory redisSentinelConfiguration() {
        RedisSentinelConfiguration sentinelConfiguration = new RedisSentinelConfiguration();
        sentinelConfiguration.setMaster(redisProperties.getSentinel().getMaster());
        List<RedisNode> nodes = redisProperties.getSentinel().getNodes().stream().map(node -> {
            try {
                return RedisNode.fromString(node);
            } catch (Exception e) {
                log.warn("Parsing redis sentinel node {} has failed. Trying to use the value as an address without port and adding default sentinel port {}", node, RedisNode.DEFAULT_SENTINEL_PORT, e);
                return new RedisNode(node, RedisNode.DEFAULT_SENTINEL_PORT);
            }
        }).toList();
        sentinelConfiguration.setSentinels(nodes);

        if (hasText(redisProperties.getUsername())) {
            sentinelConfiguration.setUsername(redisProperties.getUsername());
        }
        if (hasText(redisProperties.getPassword())) {
            sentinelConfiguration.setPassword(redisProperties.getPassword());
        }
        if (hasText(redisProperties.getSentinel().getUsername())) {
            sentinelConfiguration.setSentinelUsername(redisProperties.getSentinel().getUsername());
        }
        if (hasText(redisProperties.getSentinel().getPassword())) {
            sentinelConfiguration.setSentinelPassword(redisProperties.getSentinel().getPassword());
        }

        JedisClientConfiguration clientConfiguration = jedisClientConfiguration();
        return new JedisConnectionFactory(sentinelConfiguration, clientConfiguration);
    }

    protected JedisClientConfiguration jedisClientConfiguration() {
        if (redisProperties.isSsl()) {
            return JedisClientConfiguration.builder().useSsl().build();
        }
        return JedisClientConfiguration.defaultConfiguration();
    }

    private boolean hasText(String value) {
        return value != null && !value.isBlank();
    }
```
🤖 Prompt for AI Agents

In application-engine/src/main/java/com/netgrif/application/engine/configuration/SessionConfiguration.java around lines 48 to 92, the current hasCredentials method requires both username and password to be present, which drops password-only Redis auth; modify the logic so username and password are checked and set independently: if password is non-null/non-blank call setPassword(...), if username is non-null/non-blank call setUsername(...); apply the same independent checks for sentinelUsername/sentinelPassword (use separate conditions or replace hasCredentials with two small helper checks) so password-only deployments continue to authenticate correctly.
There is no such change in the code.
@Retoocs You're absolutely right! Looking at the diff, the old code also required both username and password:
Old code:

```java
if (redisProperties.getUsername() != null && redisProperties.getPassword() != null
        && !redisProperties.getUsername().isEmpty() && !redisProperties.getPassword().isEmpty())
```

New code:

```java
if (hasCredentials(redisProperties.getUsername(), redisProperties.getPassword()))
```

Both versions check that username AND password are non-null/non-empty before setting credentials. The new code is just a refactoring that extracts this check into a helper method; there's no behavioral change.
If password-only authentication is indeed needed for your deployment scenarios, that would be a pre-existing limitation, not something introduced by this PR. The refactoring maintains the existing behavior.
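For completeness, a minimal sketch of what independent checks could look like if password-only auth were ever needed (a hypothetical helper, not the PR's code; it relies only on the standard Spring Data Redis setters):

import org.springframework.data.redis.connection.RedisStandaloneConfiguration;

// Hypothetical sketch: apply username and password independently, so a
// password-only Redis deployment still authenticates. setUsername(String)
// and setPassword(String) are standard RedisStandaloneConfiguration API.
final class RedisCredentialSketch {

    static void applyCredentials(RedisStandaloneConfiguration configuration,
                                 String username, String password) {
        if (username != null && !username.isBlank()) {
            configuration.setUsername(username);
        }
        if (password != null && !password.isBlank()) {
            configuration.setPassword(password);
        }
    }
}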
🧠 Learnings used
Learnt from: tuplle
Repo: netgrif/application-engine PR: 377
File: application-engine/src/main/resources/application.yaml:26-27
Timestamp: 2025-11-10T15:28:13.391Z
Learning: In the netgrif/application-engine project, all engine deployments within the same cluster intentionally share the same Redis session namespace. The change from database-specific namespace to a shared "netgrif" default (configurable via NETGRIF_ENGINE_DATA_REDIS_SESSION_NAMESPACE) is by design to enable session sharing across deployments.
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 350
File: application-engine/src/main/java/com/netgrif/application/engine/startup/runner/SuperCreatorRunner.java:56-63
Timestamp: 2025-09-04T11:09:31.264Z
Learning: In SuperCreatorRunner.java, when creating a user with PasswordCredential via setCredential("password", passwordCredential), there's no need to also call setPassword() as it would be redundant - the PasswordCredential already contains the password information needed by UserServiceImpl.createUser(), and both setPassword() and setCredential() ultimately reference the same credential system.
Learnt from: machacjozef
Repo: netgrif/application-engine PR: 367
File: application-engine/src/main/resources/application.yaml:24-24
Timestamp: 2025-10-20T11:44:44.907Z
Learning: In the netgrif/application-engine project, the correction of the Elasticsearch task index name from "_taks" to "_task" in application.yaml was approved by maintainer machacjozef, indicating that any data migration concerns for this typo fix are handled separately or not applicable to their deployment scenario.
@Data
@Builder
@AllArgsConstructor
public class BulkOperationWrapper {

    private BulkOperation operation;

    private EventObject publishableEvent;
}
🧹 Nitpick | 🔵 Trivial
Consider making the wrapper immutable for safer use in queues/events
If BulkOperationWrapper instances are passed through queues and across threads, mutability via Lombok’s generated setters can be a source of subtle bugs. Consider making it immutable:
-@Data
-@Builder
-@AllArgsConstructor
-public class BulkOperationWrapper {
-
- private BulkOperation operation;
-
- private EventObject publishableEvent;
-}
+@lombok.Value
+@Builder
+@AllArgsConstructor
+public class BulkOperationWrapper {
+
+ BulkOperation operation;
+
+ EventObject publishableEvent;
+}

This keeps the builder while removing setters and enforcing immutability.
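If the @Value route were taken, construction through the builder stays exactly the same; only the setters disappear. A usage sketch (operation and event here are hypothetical local variables):

// Usage sketch with hypothetical inputs; construction is unchanged.
BulkOperationWrapper wrapper = BulkOperationWrapper.builder()
        .operation(operation)
        .publishableEvent(event)
        .build();
// No setters are generated by @Value, so the instance can cross threads safely.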
🤖 Prompt for AI Agents
In
application-engine/src/main/java/com/netgrif/application/engine/elastic/domain/BulkOperationWrapper.java
around lines 11 to 19, the class is mutable because Lombok's @Data generates setters; make it immutable for safe usage across threads/queues by replacing @Data and @AllArgsConstructor with Lombok's @Value (or by marking the fields final and using @Getter with no setters) while keeping @Builder; with the fields private final and no setter generation, instances stay immutable while still supporting builder construction.
There is no such change in the code.
@Retoocs, understood! I'll note that the current mutable design works for your use case. Thanks for the clarification. 👍
| log.debug("Index started with batch size: {} and id: {}", batch.size(), uuid); | ||
| elasticsearchClient.bulk(new BulkRequest.Builder().operations(batch.stream().map(BulkOperationWrapper::getOperation).toList()).refresh(queueProperties.getRefreshPolicy()).build()); | ||
| log.debug("Index finished with batch size: {} and id: {}", batch.size(), uuid); | ||
| checkQueue(); | ||
| } catch (Exception e) { | ||
| log.error("Bulk operation failed for batch id: {} with {} operations. " + | ||
| "Operations will be retried via scheduled indexing or manual reindex.", | ||
| uuid, batch.size(), e); | ||
| return; | ||
| } | ||
| try { | ||
| publishEventsOfBatch(batch); | ||
| } catch (Exception e) { | ||
| log.error("Event publishing failed for batch id: {}", uuid, e); | ||
| } |
Handle Elasticsearch bulk responses that report item failures
elasticsearchClient.bulk(...) returns a BulkResponse that can contain per-item failures without throwing. Right now we treat every call as success, drop the batch from the queue, and still publish events. If Elasticsearch returns errors=true, the failed documents are never indexed, yet we still signal success and lose those operations permanently. That's a correctness regression.
Guard on BulkResponse.errors() before calling checkQueue() / publishEventsOfBatch, log the failure details, and bail so the fallback reindex paths can recover.
-import co.elastic.clients.elasticsearch.core.BulkRequest;
+import co.elastic.clients.elasticsearch.core.BulkRequest;
+import co.elastic.clients.elasticsearch.core.BulkResponse;
@@
- log.debug("Index started with batch size: {} and id: {}", batch.size(), uuid);
- elasticsearchClient.bulk(new BulkRequest.Builder().operations(batch.stream().map(BulkOperationWrapper::getOperation).toList()).refresh(queueProperties.getRefreshPolicy()).build());
- log.debug("Index finished with batch size: {} and id: {}", batch.size(), uuid);
+ log.debug("Index started with batch size: {} and id: {}", batch.size(), uuid);
+ BulkResponse response = elasticsearchClient.bulk(new BulkRequest.Builder()
+ .operations(batch.stream().map(BulkOperationWrapper::getOperation).toList())
+ .refresh(queueProperties.getRefreshPolicy())
+ .build());
+ if (response.errors()) {
+ log.error("Bulk operation finished with errors for batch id: {} ({} operations). Failing items will be handled by scheduled reindex or manual reindex.", uuid, batch.size());
+ return;
+ }
+ log.debug("Index finished with batch size: {} and id: {}", batch.size(), uuid);
checkQueue();
There is no such change in the code.
@Retoocs, you're correct—the current code doesn't check BulkResponse.errors(). Right now, Line 138 calls elasticsearchClient.bulk(...) but doesn't capture the response, so if Elasticsearch accepts the bulk request but reports item-level failures (which don't throw exceptions), the code proceeds to remove those operations from the queue and publish their events as if they succeeded.
This means documents that failed to index are silently lost, and the application thinks they were processed successfully. The catch (Exception e) block on Line 141 only handles thrown exceptions, not the case where the bulk call returns normally but with errors=true in the response.
The fix CodeRabbit suggested would:
- Capture the BulkResponse
- Check response.errors()
- Log the failure and return early (so checkQueue() and publishEventsOfBatch() aren't called)
- Let your existing fallback mechanisms (scheduled reindexing / manual reindex endpoints) recover the failed operations
Do you plan to add this error handling in this PR, or would you prefer to address it in a follow-up issue?
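For reference, a minimal sketch of capturing and inspecting the response (it uses only the standard co.elastic.clients bulk-response API; the class wrapper and log wording are illustrative):

import co.elastic.clients.elasticsearch.core.BulkResponse;
import co.elastic.clients.elasticsearch.core.bulk.BulkResponseItem;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

// Sketch only: report per-item failures from a bulk response, so the batch
// is left to the scheduled/manual reindex fallbacks instead of being
// treated as successfully indexed.
final class BulkFailureReport {

    private static final Logger log = LoggerFactory.getLogger(BulkFailureReport.class);

    // Returns true when the response carried item-level errors.
    static boolean hasFailures(BulkResponse response) {
        if (!response.errors()) {
            return false;
        }
        for (BulkResponseItem item : response.items()) {
            if (item.error() != null) {
                log.error("Bulk item failed: index={}, id={}, reason={}",
                        item.index(), item.id(), item.error().reason());
            }
        }
        return true;
    }
}

In flush(), such a check would run right after elasticsearchClient.bulk(...) and return before checkQueue() and publishEventsOfBatch(batch) whenever it reports failures.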
🧠 Learnings used
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 378
File: application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticQueueManager.java:145-149
Timestamp: 2025-11-07T13:11:20.622Z
Learning: In the netgrif/application-engine project, failed Elasticsearch bulk operations in ElasticQueueManager should not be retried or re-queued within the flush() method. The system relies on scheduled reindexing and manual reindex endpoints as fallback mechanisms for failed operations, making in-method retries unnecessary and potentially problematic (could cause deadlocks, capacity issues, or infinite loops with bad data).
String objectId = ((Document) document.get("_id")).get("shortProcessId") + "-" + ((Document) document.get("_id")).get("objectId").toString();
if (objectId != null) {
    service.remove(objectId);
    return;
}

objectId = document.getObjectId("petriNetObjectId").toString();
if (objectId != null) {
    service.removeByPetriNetId(objectId);
    return;
}

throw new IllegalStateException("Case has been deleted neither by ID nor by process ID!");
service.remove(objectId);
🧩 Analysis chain
Add defensive null checks to prevent NPE and ClassCastException.
The nested field access on line 60 has multiple failure points without defensive checks:
- document.get("_id") may return null or a non-Document type
- shortProcessId or objectId fields within the _id Document may be null
- Calling .toString() on a null objectId will throw an NPE
These runtime exceptions would prevent Elasticsearch cleanup and leave orphaned documents in the index.
Apply this diff to add defensive checks:
@Override
public void onAfterDelete(AfterDeleteEvent<Case> event) {
Document document = event.getDocument();
if (document == null) {
log.warn("Trying to delete null document!");
return;
}
- String objectId = ((Document)document.get("_id")).get("shortProcessId") + "-" + ((Document)document.get("_id")).get("objectId").toString();
- service.remove(objectId);
+ Object idObject = document.get("_id");
+ if (!(idObject instanceof Document)) {
+ log.error("Document _id is not a Document, cannot delete from Elasticsearch: {}", idObject);
+ return;
+ }
+
+ Document idDoc = (Document) idObject;
+ Object shortProcessId = idDoc.get("shortProcessId");
+ Object objectId = idDoc.get("objectId");
+
+ if (shortProcessId == null || objectId == null) {
+ log.error("Missing shortProcessId or objectId in document _id, cannot delete from Elasticsearch. shortProcessId: {}, objectId: {}", shortProcessId, objectId);
+ return;
+ }
+
+ String compositeId = shortProcessId + "-" + objectId.toString();
+ try {
+ service.remove(compositeId);
+ } catch (Exception e) {
+ log.error("Failed to remove case from Elasticsearch with id: {}", compositeId, e);
+ }
}

Additionally, verify that IElasticCaseService.remove() correctly handles the "shortProcessId-objectId" format, and ensure this matches the format used in other event handlers (e.g., TaskEventHandler if it exists).
🏁 Script executed:
#!/bin/bash
# Description: Verify the objectId format is consistently used across event handlers and service methods.
# Search for other event handlers that construct similar composite IDs
echo "=== Searching for similar event handlers ==="
rg -n -A 5 -B 2 'onAfterDelete.*AfterDeleteEvent' --type java
echo -e "\n=== Searching for IElasticCaseService.remove usage ==="
ast-grep --pattern 'service.remove($_)'
echo -e "\n=== Searching for shortProcessId concatenation patterns ==="
rg -n 'shortProcessId.*-.*objectId' --type java

Length of output: 4437
I need to examine TaskEventHandler and verify the full context of both handlers, as the output shows they use identical patterns.
Now I'll examine the full context of both event handlers and search for the service method definition:
🏁 Script executed:
#!/bin/bash
# Get full context of both event handlers and verify _id structure
echo "=== CaseEventHandler.java (full onAfterDelete method) ==="
cat application-engine/src/main/java/com/netgrif/application/engine/workflow/service/CaseEventHandler.java
echo -e "\n=== TaskEventHandler.java (full onAfterDelete method) ==="
cat application-engine/src/main/java/com/netgrif/application/engine/workflow/service/TaskEventHandler.java
echo -e "\n=== Searching for IElasticCaseService.remove() definition ==="
rg -n 'void remove\|remove\(' --type java -B 2 -A 8 | grep -A 8 'interface IElastic\|class.*IElastic'
echo -e "\n=== Searching for Case domain class to understand _id structure ==="
find application-engine -name 'Case.java' -type f | head -5
echo -e "\n=== Searching for Task domain class to understand _id structure ==="
find application-engine -name 'Task.java' -type f | head -5

Length of output: 4972
Add defensive null checks and fallback logic to match TaskEventHandler pattern.
The code on lines 60-61 lacks safeguards that the parallel TaskEventHandler implements. TaskEventHandler has:
- A null check: if (document.get("_id") != null) before nested field access
- Fallback removal logic via processId
- Error handling with IllegalStateException if both approaches fail
CaseEventHandler is missing these, creating NPE and ClassCastException risks:
- document.get("_id") may be null, returning null when cast without a check
- Nested fields shortProcessId and objectId within the _id Document may be null
- No fallback if the composite ID cannot be constructed
These vulnerabilities would prevent Elasticsearch cleanup and leave orphaned documents.
Apply this diff to align with TaskEventHandler's defensive pattern:
Apply this diff to align with TaskEventHandler's defensive pattern:
@Override
public void onAfterDelete(AfterDeleteEvent<Case> event) {
Document document = event.getDocument();
if (document == null) {
log.warn("Trying to delete null document!");
return;
}
- String objectId = ((Document)document.get("_id")).get("shortProcessId") + "-" + ((Document)document.get("_id")).get("objectId").toString();
- service.remove(objectId);
+ if (document.get("_id") != null) {
+ String objectId = ((Document)document.get("_id")).get("shortProcessId") + "-" + ((Document)document.get("_id")).get("objectId").toString();
+ service.remove(objectId);
+ return;
+ }
+
+ String processId = document.getString("processId");
+ if (processId != null) {
+ service.removeByPetriNetId(processId);
+ return;
+ }
+
+ throw new IllegalStateException("Case has been deleted neither by ID nor by process ID!");
}Committable suggestion skipped: line range outside the PR's diff.
There is no such change in the code.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Check the current state of TaskEventHandler to see if it has similar simplification
echo "=== Current TaskEventHandler.onAfterDelete implementation ==="
ast-grep --pattern $'onAfterDelete($$$) {
$$$
}' application-engine/src/main/java/com/netgrif/application/engine/workflow/service/TaskEventHandler.java
echo -e "\n=== Full TaskEventHandler file ==="
cat application-engine/src/main/java/com/netgrif/application/engine/workflow/service/TaskEventHandler.java
echo -e "\n=== Check if removeByPetriNetId still exists in IElasticCaseService ==="
rg -n 'removeByPetriNetId' --type java
echo -e "\n=== Check MongoDB AfterDeleteEvent documentation/usage patterns ==="
rg -n 'AfterDeleteEvent.*getDocument' --type java -A 3 -B 1
Length of output: 2818
@Retoocs Thank you for the clarification! You're absolutely right - I misunderstood the change. I see now that this code was intentionally removed (not missing), and that makes sense since IElasticCaseService.removeByPetriNetId() was deleted as part of this PR.
I have a few questions about this simplification:
- Safety of the current approach: Can the _id field or its nested properties (shortProcessId, objectId) ever be null or missing in MongoDB's AfterDeleteEvent? The current code on line 60 will throw a NullPointerException or ClassCastException if any of these are null/missing. Is this guaranteed by MongoDB's event structure?
- Inconsistency with TaskEventHandler: I notice that TaskEventHandler.onAfterDelete() still retains the full defensive pattern with:
  - Null check: if (document.get("_id") != null)
  - Fallback: service.removeByPetriNetId(processId)
  - Error handling: IllegalStateException
  Is there a plan to update TaskEventHandler to match this simplified pattern, or is there a reason why tasks need the defensive approach while cases don't?
The simplification makes the code cleaner, but I want to ensure edge cases are covered and that both handlers follow a consistent pattern.
🤔
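To make the composite-ID question concrete, a sketch of how the "shortProcessId-objectId" Elasticsearch ID is derived from the MongoDB _id document, with the null checks under discussion made explicit (the helper itself is illustrative, not code from this PR):

import org.bson.Document;

// Illustrative helper: mirrors the composite-ID construction in the
// handlers above. Returns null when the _id document does not carry both
// parts - exactly the edge case raised in the first question.
final class ElasticIdSketch {

    static String compositeId(Document document) {
        Object idObject = document.get("_id");
        if (!(idObject instanceof Document idDoc)) {
            return null;
        }
        Object shortProcessId = idDoc.get("shortProcessId");
        Object objectId = idDoc.get("objectId");
        if (shortProcessId == null || objectId == null) {
            return null;
        }
        return shortProcessId + "-" + objectId;
    }
}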
🧬 Code Graph Analysis Results
nae-spring-core-adapter/src/main/java/com/netgrif/application/engine/adapter/spring/workflow/domain/Case.java
At line range 19-109:
@Document
@QueryEntity
public class Case extends com.netgrif.application.engine.objects.workflow.domain.Case {
public Case() {
super();
}
public Case(PetriNet petriNet) {
super(petriNet);
}
@Id
@Override
public ProcessResourceId get_id() {
return super.get_id();
}
@Override
@JsonIgnore
@org.springframework.data.mongodb.core.mapping.Field("activePlaces")
public Map<String, Integer> getActivePlaces() {
return super.getActivePlaces();
}
@Override
@JsonIgnore
public Map<String, Integer> getConsumedTokens() {
return super.getConsumedTokens();
}
@Override
@JsonIgnore
public Map<String, DataField> getDataSet() {
return super.getDataSet();
}
@Override
public Set<TaskPair> getTasks() {
return super.getTasks();
}
@Override
@JsonIgnore
public Set<String> getEnabledRoles() {
return super.getEnabledRoles();
}
@Override
@Transient
public List<Field<?>> getImmediateData() {
return super.getImmediateData();
}
@Override
@JsonIgnore
public Set<String> getImmediateDataFields() {
return super.getImmediateDataFields();
}
@Override
@LastModifiedDate
public LocalDateTime getLastModified() {
return super.getLastModified();
}
@Override
@Transient
@JsonIgnore
public PetriNet getPetriNet() {
return super.getPetriNet();
}
@Override
@JsonIgnore
public List<String> getViewUserRefs() {
return super.getViewUserRefs();
}
@Override
@JsonIgnore
public List<String> getViewUsers() {
return super.getViewUsers();
}
@Override
public ActorRef getAuthor() {
return super.getAuthor();
}
}
application-engine/src/main/java/com/netgrif/application/engine/importer/service/FieldFactory.java
At line range 31-862:
@org.springframework.stereotype.Component
@Slf4j
public final class FieldFactory {
@Autowired
private StorageConfigurationProperties fileStorageConfiguration;
@Autowired
private FormatFactory formatFactory;
@Autowired
private ViewFactory viewFactory;
@Autowired
private ComponentFactory componentFactory;
@Autowired
private IDataValidator dataValidator;
@Autowired
private IDataValidationExpressionEvaluator dataValidationExpressionEvaluator;
private StorageConfigurationProperties minIoProperties;
private IStorageResolverService storageResolverService;
@Autowired
public void setMinIoProperties(StorageConfigurationProperties minIoProperties) {
this.minIoProperties = minIoProperties;
}
@Autowired
public void setStorageResolverService(IStorageResolverService storageResolverService) {
this.storageResolverService = storageResolverService;
}
public static Set<I18nString> parseMultichoiceValue(Case useCase, String fieldId) {
Object values = useCase.getFieldValue(fieldId);
if (values instanceof ArrayList) {
return (Set<I18nString>) ((ArrayList) values).stream().map(val -> new I18nString(val.toString())).collect(Collectors.toCollection(LinkedHashSet::new));
} else {
return (Set<I18nString>) values;
}
}
public static Set<String> parseMultichoiceMapValue(Case useCase, String fieldId) {
Object values = useCase.getFieldValue(fieldId);
if (values instanceof ArrayList) {
return (Set<String>) ((ArrayList) values).stream().map(val -> val.toString()).collect(Collectors.toCollection(LinkedHashSet::new));
} else {
return (Set<String>) values;
}
}
public static Double parseDouble(Object value) {
if (value instanceof String) {
return Double.parseDouble((String) value);
} else if (value instanceof Integer) {
return ((Integer) value) * 1D;
} else if (value instanceof Double) {
return (Double) value;
}
return null;
}
public static LocalDate parseDate(Object value) {
if (value instanceof Date) {
return ((Date) value).toInstant().atZone(ZoneId.systemDefault()).toLocalDate();
} else if (value instanceof String) {
return parseDateFromString((String) value);
} else if (value instanceof LocalDate) {
return (LocalDate) value;
}
return null;
}
/**
* Available formats - YYYYMMDD; YYYY-MM-DD; DD.MM.YYYY
*
* @param value - Date as string
* @return Parsed date as LocalDate object or null if date cannot be parsed
*/
public static LocalDate parseDateFromString(String value) {
if (value == null)
return null;
List<String> patterns = Arrays.asList("dd.MM.yyyy");
try {
return LocalDate.parse(value, DateTimeFormatter.BASIC_ISO_DATE);
} catch (DateTimeParseException e) {
try {
return LocalDate.parse(value, DateTimeFormatter.ISO_DATE);
} catch (DateTimeParseException ex) {
for (String pattern : patterns) {
try {
return LocalDate.parse(value, DateTimeFormatter.ofPattern(pattern));
} catch (DateTimeParseException | IllegalArgumentException exc) {
continue;
}
}
}
}
LocalDateTime dateTime = parseDateTimeFromString(value);
if (dateTime != null) {
return dateTime.toLocalDate();
}
return null;
}
public static LocalDateTime parseDateTime(Object value) {
if (value == null)
return null;
if (value instanceof LocalDate)
return LocalDateTime.of((LocalDate) value, LocalTime.NOON);
else if (value instanceof String)
return parseDateTimeFromString((String) value);
else if (value instanceof Date)
return LocalDateTime.ofInstant(((Date) value).toInstant(), ZoneId.systemDefault());
return (LocalDateTime) value;
}
public static LocalDateTime parseDateTimeFromString(String value) {
if (value == null)
return null;
List<String> patterns = Arrays.asList("dd.MM.yyyy HH:mm", "dd.MM.yyyy HH:mm:ss");
try {
return LocalDateTime.parse(value, DateTimeFormatter.ISO_DATE_TIME);
} catch (DateTimeParseException e) {
try {
return LocalDateTime.parse(value, DateTimeFormatter.ISO_LOCAL_DATE_TIME);
} catch (DateTimeParseException ex) {
try {
return LocalDateTime.parse(value, DateTimeFormatter.ISO_INSTANT);
} catch (DateTimeParseException exc) {
for (String pattern : patterns) {
try {
return LocalDateTime.parse(value, DateTimeFormatter.ofPattern(pattern));
} catch (DateTimeParseException | IllegalArgumentException excp) {
continue;
}
}
}
}
}
return null;
}
public static I18nString parseEnumValue(Case useCase, String fieldId, EnumerationField field) {
Object value = useCase.getFieldValue(fieldId);
if (value instanceof String) {
for (I18nString i18nString : field.getChoices()) {
if (i18nString.contains((String) value)) {
return i18nString;
}
}
return new I18nString((String) value);
// throw new IllegalArgumentException("Value " + value + " is not a valid value.");
} else {
return (I18nString) value;
}
}
public static String parseEnumerationMapValue(Case useCase, String fieldId) {
Object value = useCase.getFieldValue(fieldId);
return value != null ? value.toString() : null;
}
// TODO: refactor this shit
Field getField(Data data, Importer importer) throws IllegalArgumentException, MissingIconKeyException {
Field field;
switch (data.getType()) {
case TEXT:
field = buildTextField(data);
break;
case BOOLEAN:
field = buildBooleanField(data);
break;
case DATE:
field = buildDateField(data);
break;
case FILE:
field = buildFileField(data);
break;
case FILE_LIST:
field = buildFileListField(data);
break;
case ENUMERATION:
field = buildEnumerationField(data, importer);
break;
case MULTICHOICE:
field = buildMultichoiceField(data, importer);
break;
case NUMBER:
field = buildNumberField(data);
break;
case USER:
field = buildUserField(data, importer);
break;
case USER_LIST:
field = buildUserListField(data, importer);
break;
case CASE_REF:
field = buildCaseField(data);
break;
case DATE_TIME:
field = buildDateTimeField(data);
break;
case BUTTON:
field = buildButtonField(data);
break;
case TASK_REF:
field = buildTaskField(data, importer.getDocument().getTransition());
break;
case ENUMERATION_MAP:
field = buildEnumerationMapField(data, importer);
break;
case MULTICHOICE_MAP:
field = buildMultichoiceMapField(data, importer);
break;
case FILTER:
field = buildFilterField(data);
break;
case I_18_N:
field = buildI18nField(data, importer);
break;
case STRING_COLLECTION:
field = buildStringCollectionField(data, importer);
break;
default:
throw new IllegalArgumentException(data.getType() + " is not a valid Field type");
}
field.setName(importer.toI18NString(data.getTitle()));
field.setImportId(data.getId());
field.setImmediate(data.isImmediate());
if (data.getLength() != null) {
field.setLength(data.getLength());
}
if (data.getDesc() != null)
field.setDescription(importer.toI18NString(data.getDesc()));
if (data.getPlaceholder() != null)
field.setPlaceholder(importer.toI18NString(data.getPlaceholder()));
if (data.getValid() != null) {
List<Valid> list = data.getValid();
for (Valid item : list) {
field.addValidation(makeValidation(item.getValue(), null, item.isDynamic()));
}
}
if (data.getValidations() != null) {
List<com.netgrif.application.engine.objects.importer.model.Validation> list = data.getValidations().getValidation();
for (com.netgrif.application.engine.objects.importer.model.Validation item : list) {
field.addValidation(makeValidation(item.getExpression().getValue(), importer.toI18NString(item.getMessage()), item.getExpression().isDynamic()));
}
}
if (data.getFormat() != null) {
Format format = formatFactory.buildFormat(data.getFormat());
field.setFormat(format);
}
if (data.getView() != null) {
log.warn("Data attribute [view] in field [" + field.getImportId() + "] is deprecated.");
View view = viewFactory.buildView(data);
field.setComponent(new Component(view.getValue()));
}
if (data.getComponent() != null) {
Component component = componentFactory.buildComponent(data.getComponent(), importer, data);
field.setComponent(component);
}
setActions(field, data);
setEncryption(field, data);
dataValidator.checkDeprecatedAttributes(data);
return field;
}
private StringCollectionField buildStringCollectionField(Data data, Importer importer) {
StringCollectionField field = new StringCollectionField();
setDefaultValues(field, data, defaultValues -> {
if (defaultValues != null) {
field.setDefaultValue(defaultValues);
}
});
return field;
}
private com.netgrif.application.engine.objects.petrinet.domain.dataset.logic.validation.Validation makeValidation(String rule, I18nString message, boolean dynamic) {
return dynamic ? new DynamicValidation(rule, message) : new com.netgrif.application.engine.objects.petrinet.domain.dataset.logic.validation.Validation(rule, message);
}
private TaskField buildTaskField(Data data, List<Transition> transitions) {
TaskField field = new TaskField();
setDefaultValues(field, data, defaultValues -> {
if (defaultValues != null && !defaultValues.isEmpty()) {
List<String> defaults = new ArrayList<>();
defaultValues.forEach(s -> {
if (transitions.stream().noneMatch(t -> t.getId().equals(s)))
log.warn("There is no transition with id [" + s + "]");
defaults.add(s);
});
field.setDefaultValue(defaults);
}
});
return field;
}
private MultichoiceField buildMultichoiceField(Data data, Importer importer) {
MultichoiceField field = new MultichoiceField();
if (data.getOptions() != null) {
setFieldOptions(field, data, importer);
} else {
setFieldChoices(field, data, importer);
}
setDefaultValues(field, data, init -> {
if (init != null && !init.isEmpty()) {
init = init.stream().map(String::trim).collect(Collectors.toList());
List<String> finalInits = init.stream().filter(i -> field.getChoices().stream().anyMatch(ch -> ch.getDefaultValue().equals(i))).collect(Collectors.toList());
List<String> unresolvedChoices = init.stream().filter(i -> field.getChoices().stream().noneMatch(ch -> ch.getDefaultValue().equals(i))).collect(Collectors.toList());
if (!unresolvedChoices.isEmpty()) {
finalInits.addAll(unresolvedChoices.stream().map(uch -> data.getOptions().getOption().stream().filter(o -> o.getKey().equals(uch)).findFirst().orElse(new Option()).getValue()).collect(Collectors.toList()));
finalInits.removeAll(Collections.singletonList(null));
}
field.setDefaultValues(finalInits);
}
});
return field;
}
private EnumerationField buildEnumerationField(Data data, Importer importer) {
EnumerationField field = new EnumerationField();
if (data.getOptions() != null) {
setFieldOptions(field, data, importer);
} else {
setFieldChoices(field, data, importer);
}
setDefaultValue(field, data, init -> {
if (init != null && !init.equals("")) {
String tempInit = init;
if (field.getChoices().stream().filter(ch -> ch.getDefaultValue().equals(tempInit)).findAny().isEmpty()) {
init = data.getOptions().getOption().stream().filter(o -> o.getKey().equals(tempInit)).findFirst().orElse(new Option()).getValue();
}
field.setDefaultValue(init);
}
});
return field;
}
private void setFieldChoices(ChoiceField<?> field, Data data, Importer importer) {
if (data.getValues() != null && !data.getValues().isEmpty() && data.getValues().get(0).isDynamic()) {
field.setExpression(new Expression(data.getValues().get(0).getValue()));
} else if (data.getValues() != null) {
List<I18nString> choices = data.getValues().stream()
.map(importer::toI18NString)
.collect(Collectors.toList());
field.getChoices().addAll(choices);
}
}
private MultichoiceMapField buildMultichoiceMapField(Data data, Importer importer) {
MultichoiceMapField field = new MultichoiceMapField();
setFieldOptions(field, data, importer);
setDefaultValues(field, data, init -> {
if (init != null && !init.isEmpty()) {
field.setDefaultValue(new LinkedHashSet<>(init));
}
});
return field;
}
private EnumerationMapField buildEnumerationMapField(Data data, Importer importer) {
EnumerationMapField field = new EnumerationMapField();
setFieldOptions(field, data, importer);
setDefaultValue(field, data, init -> {
if (init != null && !init.isEmpty()) {
field.setDefaultValue(init);
}
});
return field;
}
private void setFieldOptions(ChoiceField<?> field, Data data, Importer importer) {
if (data.getOptions() != null && data.getOptions().getInit() != null) {
field.setExpression(new Expression(data.getOptions().getInit().getValue()));
return;
}
List<I18nString> options = (data.getOptions() == null) ? new ArrayList<>() : data.getOptions().getOption().stream()
.map(importer::toI18NString)
.collect(Collectors.toList());
field.getChoices().addAll(options);
}
private void setFieldOptions(MapOptionsField<I18nString, ?> field, Data data, Importer importer) {
if (data.getOptions() != null && data.getOptions().getInit() != null) {
field.setExpression(new Expression(data.getOptions().getInit().getValue()));
return;
}
Map<String, I18nString> choices = (data.getOptions() == null) ? new LinkedHashMap<>() : data.getOptions().getOption().stream()
.collect(Collectors.toMap(Option::getKey, importer::toI18NString, (o1, o2) -> o1, LinkedHashMap::new));
field.setOptions(choices);
}
private TextField buildTextField(Data data) {
String value = null;
List<I18NStringTypeWithExpression> values = data.getValues();
if (values != null && !values.isEmpty())
value = values.get(0).getValue();
TextField field = new TextField(value);
setDefaultValue(field, data, field::setDefaultValue);
return field;
}
private BooleanField buildBooleanField(Data data) {
BooleanField field = new BooleanField();
setDefaultValue(field, data, defaultValue -> {
if (defaultValue != null) {
field.setDefaultValue(Boolean.valueOf(defaultValue));
}
});
return field;
}
private DateField buildDateField(Data data) {
DateField field = new DateField();
setDefaultValue(field, data, defaultValue -> {
if (defaultValue != null) {
field.setDefaultValue(parseDate(defaultValue));
}
});
return field;
}
private NumberField buildNumberField(Data data) {
NumberField field = new NumberField();
setDefaultValue(field, data, defaultValue -> {
if (defaultValue != null) {
field.setDefaultValue(Double.parseDouble(defaultValue));
}
});
return field;
}
private ButtonField buildButtonField(Data data) {
ButtonField field = new ButtonField();
setDefaultValue(field, data, defaultValue -> {
if (defaultValue != null) {
field.setDefaultValue(Integer.parseInt(defaultValue));
}
});
return field;
}
private DateTimeField buildDateTimeField(Data data) {
DateTimeField field = new DateTimeField();
setDefaultValue(field, data, defaultValue -> field.setDefaultValue(parseDateTime(defaultValue)));
return field;
}
private CaseField buildCaseField(Data data) {
AllowedNets nets = data.getAllowedNets();
CaseField field;
if (nets == null) {
field = new CaseField();
} else {
field = new CaseField(new ArrayList<>(nets.getAllowedNet()));
}
setDefaultValues(field, data, inits -> {
});
return field;
}
private UserField buildUserField(Data data, Importer importer) {
String[] roles = data.getValues().stream()
.map(value -> importer.getRoles().get(value.getValue()).getStringId())
.toArray(String[]::new);
UserField field = new UserField(roles);
setDefaultValues(field, data, inits -> {
field.setDefaultValue(null);
});
return field;
}
private UserListField buildUserListField(Data data, Importer importer) {
String[] roles = data.getValues().stream()
.map(value -> importer.getRoles().get(value.getValue()).getStringId())
.toArray(String[]::new);
UserListField field = new UserListField(roles);
setDefaultValues(field, data, inits -> {
});
return field;
}
private FileField buildFileField(Data data) {
FileField fileField = new FileField();
resolveStorage(data, fileField);
setDefaultValue(fileField, data, defaultValue -> {
if (defaultValue != null) {
fileField.setDefaultValue(defaultValue);
}
});
return fileField;
}
private FileListField buildFileListField(Data data) {
FileListField fileListField = new FileListField();
resolveStorage(data, fileListField);
setDefaultValues(fileListField, data, defaultValues -> {
if (defaultValues != null && !defaultValues.isEmpty()) {
fileListField.setDefaultValue(defaultValues);
}
});
return fileListField;
}
private FilterField buildFilterField(Data data) {
AllowedNets nets = data.getAllowedNets();
if (nets == null) {
return new FilterField();
} else {
return new FilterField(new ArrayList<>(nets.getAllowedNet()));
}
}
private I18nField buildI18nField(Data data, Importer importer) {
I18nField i18nField = new I18nField();
String initExpression = getInitExpression(data);
if (initExpression != null) {
i18nField.setInitExpression(new Expression(initExpression));
} else {
if (data.getInits() != null && data.getInits().getInit() != null && !data.getInits().getInit().isEmpty()) {
i18nField.setDefaultValue(new I18nString(data.getInits().getInit().get(0).getValue()));
} else if (data.getInit() != null && (data.getInit().getName() == null || data.getInit().getName().equals(""))) {
i18nField.setDefaultValue(new I18nString(data.getInit().getValue()));
} else if (data.getInit() != null && data.getInit().getName() != null && !data.getInit().getName().equals("")) {
i18nField.setDefaultValue(importer.toI18NString(data.getInit()));
} else {
i18nField.setDefaultValue(new I18nString(""));
}
}
return i18nField;
}
private void setActions(Field field, Data data) {
if (data.getAction() != null && data.getAction().size() != 0) {
// data.getAction().forEach(action -> field.addAction(action.getValue(), action.getTrigger()));
}
}
private void setEncryption(Field field, Data data) {
if (data.getEncryption() != null && data.getEncryption().isValue()) {
String encryption = data.getEncryption().getAlgorithm();
if (encryption == null)
encryption = "PBEWITHSHA256AND256BITAES-CBC-BC";
field.setEncryption(encryption);
}
}
public Field<?> buildFieldWithoutValidation(Case useCase, String fieldId, String transitionId) {
return buildField(useCase, fieldId, false, transitionId);
}
public Field<?> buildFieldWithValidation(Case useCase, String fieldId, String transitionId) {
return buildField(useCase, fieldId, true, transitionId);
}
private Field<?> buildField(Case useCase, String fieldId, boolean withValidation, String transitionId) {
Field<?> field = useCase.getPetriNet().getDataSet().get(fieldId);
resolveDataValues(field, useCase, fieldId);
resolveComponent(field, useCase, transitionId);
if (field instanceof ChoiceField)
resolveChoices((ChoiceField) field, useCase);
if (field instanceof MapOptionsField)
resolveMapOptions((MapOptionsField) field, useCase);
if (field instanceof FieldWithAllowedNets)
resolveAllowedNets((FieldWithAllowedNets) field, useCase);
if (field instanceof FilterField)
resolveFilterMetadata((FilterField) field, useCase);
if (withValidation)
resolveValidations(field, useCase);
return field;
}
@SuppressWarnings({"all", "rawtypes", "unchecked"})
private void resolveValidations(Field field, Case useCase) {
List<com.netgrif.application.engine.objects.petrinet.domain.dataset.logic.validation.Validation> validations = useCase.getDataField(field.getImportId()).getValidations();
if (validations != null) {
field.setValidations(validations.stream().map(it -> it.clone()).collect(Collectors.toList()));
}
if (field.getValidations() == null) return;
((List<com.netgrif.application.engine.objects.petrinet.domain.dataset.logic.validation.Validation>) field.getValidations()).stream()
.filter(it -> it instanceof DynamicValidation).map(it -> (DynamicValidation) it).forEach(valid -> {
valid.setCompiledRule(dataValidationExpressionEvaluator.compile(useCase, valid.getExpression()));
});
}
private void resolveChoices(ChoiceField field, Case useCase) {
Set<I18nString> choices = useCase.getDataField(field.getImportId()).getChoices();
if (choices == null)
return;
field.setChoices(choices);
}
private void resolveComponent(Field field, Case useCase, String transitionId) {
if (useCase.getDataField(field.getStringId()).hasComponent(transitionId)) {
field.setComponent(useCase.getDataField(field.getStringId()).getDataRefComponents().get(transitionId));
} else if (useCase.getDataField(field.getStringId()).hasComponent()) {
field.setComponent(useCase.getDataField(field.getStringId()).getComponent());
}
}
private void resolveMapOptions(MapOptionsField field, Case useCase) {
Map options = useCase.getDataField(field.getImportId()).getOptions();
if (options == null)
return;
field.setOptions(options);
}
private void resolveAllowedNets(FieldWithAllowedNets field, Case useCase) {
List<String> allowedNets = useCase.getDataField(field.getImportId()).getAllowedNets();
if (allowedNets == null)
return;
field.setAllowedNets(allowedNets);
}
private void resolveFilterMetadata(FilterField field, Case useCase) {
Map<String, Object> metadata = useCase.getDataField(field.getImportId()).getFilterMetadata();
if (metadata == null)
return;
field.setFilterMetadata(metadata);
}
public Field buildImmediateField(Case useCase, String fieldId) {
Field field = useCase.getPetriNet().getDataSet().get(fieldId).clone();
resolveDataValues(field, useCase, fieldId);
resolveAttributeValues(field, useCase, fieldId);
return field;
}
@SuppressWarnings("RedundantCast")
private void resolveDataValues(Field field, Case useCase, String fieldId) {
switch (field.getType()) {
case DATE:
parseDateValue((DateField) field, fieldId, useCase);
parseDateDefaultValue((DateField) field);
break;
case NUMBER:
field.setValue(parseNumberValue(useCase, fieldId));
break;
case ENUMERATION:
field.setValue(parseEnumValue(useCase, fieldId, (EnumerationField) field));
((EnumerationField) field).setChoices(getFieldChoices((ChoiceField<?>) field, useCase));
break;
case ENUMERATION_MAP:
field.setValue(parseEnumerationMapValue(useCase, fieldId));
((EnumerationMapField) field).setOptions(getFieldOptions((MapOptionsField<?, ?>) field, useCase));
break;
case MULTICHOICE_MAP:
field.setValue(parseMultichoiceMapValue(useCase, fieldId));
((MultichoiceMapField) field).setOptions(getFieldOptions((MapOptionsField<?, ?>) field, useCase));
break;
case MULTICHOICE:
field.setValue(parseMultichoiceValue(useCase, fieldId));
((MultichoiceField) field).setChoices(getFieldChoices((ChoiceField<?>) field, useCase));
break;
case DATETIME:
parseDateTimeValue((DateTimeField) field, fieldId, useCase);
break;
case FILE:
parseFileValue((FileField) field, useCase, fieldId);
break;
case FILELIST:
parseFileListValue((FileListField) field, useCase, fieldId);
break;
case USER:
parseUserValues((UserField) field, useCase, fieldId);
break;
case USERLIST:
parseUserListValues((UserListField) field, useCase, fieldId);
break;
default:
field.setValue(useCase.getFieldValue(fieldId));
}
}
private void parseUserValues(UserField field, Case useCase, String fieldId) {
DataField userField = useCase.getDataField(fieldId);
if (userField.getChoices() != null) {
Set<String> roles = userField.getChoices().stream().map(I18nString::getDefaultValue).collect(Collectors.toSet());
field.setRoles(roles);
}
field.setValue((UserFieldValue) useCase.getFieldValue(fieldId));
}
private void parseUserListValues(UserListField field, Case useCase, String fieldId) {
DataField userListField = useCase.getDataField(fieldId);
if (userListField.getChoices() != null) {
Set<String> roles = userListField.getChoices().stream().map(I18nString::getDefaultValue).collect(Collectors.toSet());
field.setRoles(roles);
}
field.setValue((UserListFieldValue) useCase.getFieldValue(fieldId));
}
private Double parseNumberValue(Case useCase, String fieldId) {
Object value = useCase.getFieldValue(fieldId);
return parseDouble(value);
}
private void parseDateValue(DateField field, String fieldId, Case useCase) {
Object value = useCase.getFieldValue(fieldId);
field.setValue(parseDate(value));
}
private void parseDateDefaultValue(DateField field) {
Object value = field.getDefaultValue();
field.setDefaultValue(parseDate(value));
}
private void parseDateTimeValue(DateTimeField field, String fieldId, Case useCase) {
Object value = useCase.getFieldValue(fieldId);
field.setValue(parseDateTime(value));
}
private void parseFileValue(FileField field, Case useCase, String fieldId) {
Object value = useCase.getFieldValue(fieldId);
if (value == null)
return;
if (value instanceof String) {
field.setValue((String) value);
} else if (value instanceof FileFieldValue) {
field.setValue((FileFieldValue) value);
} else
throw new IllegalArgumentException("Object " + value.toString() + " cannot be set as value to the File field [" + fieldId + "] !");
}
private void parseFileListValue(FileListField field, Case useCase, String fieldId) {
Object value = useCase.getFieldValue(fieldId);
if (value == null)
return;
if (value instanceof String) {
field.setValue((String) value);
} else if (value instanceof FileListFieldValue) {
field.setValue((FileListFieldValue) value);
} else {
throw new IllegalArgumentException("Object " + value.toString() + " cannot be set as value to the File list field [" + fieldId + "] !");
}
}
private void resolveAttributeValues(Field field, Case useCase, String fieldId) {
DataField dataField = useCase.getDataSet().get(fieldId);
if (field.getType().equals(FieldType.CASE_REF) || field.getType().equals(FieldType.FILTER)) {
List<String> allowedNets = new ArrayList<>(dataField.getAllowedNets());
((FieldWithAllowedNets) field).setAllowedNets(allowedNets);
}
if (field.getType().equals(FieldType.FILTER)) {
Map<String, Object> filterMetadata = new HashMap<>(dataField.getFilterMetadata());
((FilterField) field).setFilterMetadata(filterMetadata);
}
}
private <T> void setDefaultValue(Field<T> field, Data data, Consumer<String> setDefault) {
String initExpression = getInitExpression(data);
if (initExpression != null) {
field.setInitExpression(new Expression(initExpression));
} else {
setDefault.accept(resolveInit(data));
}
}
private <T> void setDefaultValues(Field<T> field, Data data, Consumer<List<String>> setDefault) {
String initExpression = getInitExpression(data);
if (initExpression != null) {
field.setInitExpression(new Expression(initExpression));
} else {
setDefault.accept(resolveInits(data));
}
}
private String getInitExpression(Data data) {
if (data.getInit() != null) {
if (data.getInit().isDynamic()) {
return data.getInit().getValue();
}
}
return null;
}
private String resolveInit(Data data) {
if (data.getInits() != null && data.getInits().getInit() != null) {
return data.getInits().getInit().get(0).getValue();
}
if (data.getInit() != null) return data.getInit().getValue();
return null;
}
private List<String> resolveInits(Data data) {
if (data.getInits() != null && data.getInits().getInit() != null) {
return data.getInits().getInit().stream().map(Init::getValue).collect(Collectors.toList());
}
if (data.getInit() != null) return Arrays.asList(data.getInit().getValue().split(","));
return Collections.emptyList();
}
private Set<I18nString> getFieldChoices(ChoiceField<?> field, Case useCase) {
if (useCase.getDataField(field.getImportId()).getChoices() == null) {
return field.getChoices();
} else {
return useCase.getDataField(field.getImportId()).getChoices();
}
}
private Map<String, I18nString> getFieldOptions(MapOptionsField<?, ?> field, Case useCase) {
if (useCase.getDataField(field.getImportId()).getOptions() == null) {
return (Map<String, I18nString>) field.getOptions();
} else {
return useCase.getDataField(field.getImportId()).getOptions();
}
}
private void resolveStorage(Data data, StorageField<?> field) {
field.setStorage(StorageFactory.createStorage(data, storageResolverService, fileStorageConfiguration.getDefaultType()));
}
}
🧠 Learnings used
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 339
File: application-engine/src/main/java/com/netgrif/application/engine/elastic/service/ElasticTaskQueueManager.java:38-45
Timestamp: 2025-08-20T07:27:02.660Z
Learning: When reviewing ElasticTaskQueueManager changes, task.getTask().getId() returns the document identifier while task.getTaskId() returns the business task identifier. The queue operations should use consistent identifiers throughout the lifecycle (scheduling, processing, cleanup).
Learnt from: renczesstefan
Repo: netgrif/application-engine PR: 339
File: application-engine/src/test/groovy/com/netgrif/application/engine/filters/FilterImportExportTest.groovy:341-341
Timestamp: 2025-08-19T20:07:15.621Z
Learning: In the Elasticsearch migration, the stringId property was removed from ElasticCase and ElasticTask, with the id property now containing the MongoDB case/task object ID directly. This consolidation eliminates redundancy and simplifies the data model.
public class I18nStringUtils {

    public static List<String> collectTranslations(I18nString i18nString) {
        List<String> translations = new ArrayList<>();
        if (i18nString == null) {
            return translations;
        }
        translations.add(i18nString.getDefaultValue());
        translations.addAll(i18nString.getTranslations().values());
        return translations;
    }
Guard against null translations/defaults to avoid NPEs and null entries
Current implementation assumes i18nString.getTranslations() and getDefaultValue() are non‑null. If either can be null, this will either throw an NPE or add nulls into the result list.
Consider tightening defensively:
public static List<String> collectTranslations(I18nString i18nString) {
List<String> translations = new ArrayList<>();
if (i18nString == null) {
return translations;
}
- translations.add(i18nString.getDefaultValue());
- translations.addAll(i18nString.getTranslations().values());
+ if (i18nString.getDefaultValue() != null) {
+ translations.add(i18nString.getDefaultValue());
+ }
+ if (i18nString.getTranslations() != null) {
+ translations.addAll(i18nString.getTranslations().values());
+ }
return translations;
}
There is no such change in the code.
🧩 Analysis chain
🏁 Script executed:
#!/bin/bash
# Find the I18nString class definition
fd -t f -e java -e groovy "I18nString.java" -x cat {}
Length of output: 3049
@Retoocs, you're correct that the current code doesn't have these null checks. I was suggesting adding them as a defensive measure.
Looking at the I18nString class definition, I can see:
- defaultValue is a String field that can be null - adding null to the translations list might not be desirable
- translations is initialized to an empty HashMap in the constructors, but since the class uses Lombok's @Data, it generates a setter that could allow it to be set to null, which would cause an NPE on .values()
- Adding
nullentries to the result list whendefaultValueisnull - NPE if
translationsmap isnull(edge case, but possible via setter)
If you're confident these scenarios won't occur in practice (e.g., internal usage guarantees non-null values), the current implementation is fine. Otherwise, the defensive checks would make the utility more robust. Let me know your preference!
🧠 Learnings used
Learnt from: machacjozef
Repo: netgrif/application-engine PR: 367
File: application-engine/src/main/resources/application.yaml:24-24
Timestamp: 2025-10-20T11:44:44.907Z
Learning: In the netgrif/application-engine project, the correction of the Elasticsearch task index name from "_taks" to "_task" in application.yaml was approved by maintainer machacjozef, indicating that any data migration concerns for this typo fix are handled separately or not applicable to their deployment scenario.
Description
Fixes NAE-2267
Dependencies
No new dependencies were introduced
Third party dependencies
Blocking Pull requests
There are no dependencies on other PRs
How Has This Been Tested?
This was tested manually and with unit tests.
Test Configuration
Checklist: