#3030 Add flatten Command for Objects to PPL #3267

Open
wants to merge 54 commits into base: main

Commits (54)
d7b759d
Add flatten command to ANTLR lexer and parser.
currantw Jan 17, 2025
f3ed5e5
Skeleton implementation, tests, and documents with lots of TODOs.
currantw Jan 20, 2025
214458d
Initial implementation
currantw Jan 20, 2025
b17935c
Fix typo
currantw Jan 24, 2025
39d8587
Initial implementation
currantw Jan 27, 2025
def6c5f
Update/fix tests.
currantw Jan 27, 2025
a77ba30
Update integration tests to align with doc tests.
currantw Jan 31, 2025
c7e4baa
Minor cleanup.
currantw Jan 28, 2025
27e7bf8
Add `ExplainIT` tests for flatten
currantw Jan 28, 2025
c70ca31
Revert recursive flattening, add documentation, more test updates
currantw Jan 28, 2025
93bd2b8
One more doctest fix
currantw Jan 28, 2025
91b7df6
Fix `ExplainIT` error
currantw Jan 28, 2025
ab38a53
Add additional test case to `flatten.rst`
currantw Jan 28, 2025
654669c
Fix `FlattenCommandIT`, add additional test case.
currantw Jan 28, 2025
8091c8e
Fix `PhysicalPlanNodeVisitor` test coverage.
currantw Jan 28, 2025
fe748d3
Review: use `StringUtils.format` instead of `String.format`.
currantw Jan 29, 2025
afefb84
Fix `LogicalFlattenTest`.
currantw Jan 29, 2025
9c2e0c3
Simplify algorithm for `Analyzer`.
currantw Jan 29, 2025
4e80f0b
Update to support flattening nested structs.
currantw Jan 30, 2025
d81e4b5
Fix unrelated bug in `IPFUnctionsTest`.
currantw Jan 30, 2025
025cdfa
Update `IPFUnctionsTest` to anchor at start.
currantw Jan 30, 2025
88f3285
Minor cleanup.
currantw Jan 30, 2025
072e30c
Fix doctest formatting.
currantw Jan 30, 2025
89a9903
Address minor review comments.
currantw Jan 30, 2025
2e35642
Fix doc tests.
currantw Jan 31, 2025
ae683a5
Update integration tests to align with doc tests.
currantw Jan 31, 2025
16e31a9
Review - minor documentation updates.
currantw Jan 31, 2025
a4eec2f
Remove double periods
currantw Feb 1, 2025
b9365ac
Add comment on `Map.equals`.
currantw Feb 1, 2025
a86cbb2
Remove unnecessary error checks.
currantw Feb 1, 2025
d636911
Update to maintain existing field.
currantw Feb 3, 2025
ed9e730
Update for test coverage
currantw Feb 3, 2025
ebdbac9
Simplify `Analyzer` implementation
currantw Feb 3, 2025
ad85abb
Rename `cities` dataset to `flatten`
currantw Feb 5, 2025
b20cb38
SpotlessApply
currantw Feb 5, 2025
2b3d867
Minor doc cleanup.
currantw Feb 5, 2025
4ca9054
Fix failing IT
currantw Feb 5, 2025
65cae37
Update incorrect documentation in `Analyzer.visitFlatten`.
currantw Feb 5, 2025
6bb94aa
Update integ and doc tests to add another example of original field b…
currantw Feb 6, 2025
1064d19
Review comment - move example to `Analyzer.visitFlatten` Javadoc.
currantw Feb 6, 2025
e35bc40
Review comment - update `Analyzer.visitFlatten` Javadoc to specify th…
currantw Feb 6, 2025
5977966
Review comment - remove unnecessary @Getter
currantw Feb 6, 2025
ca4357b
Review comments - add `testStructNestedDeep` test case
currantw Feb 6, 2025
db4944b
Review comments - add `testStructNestedDeep` test case
currantw Feb 6, 2025
5d0026b
Woops! Fix failing test.
currantw Feb 6, 2025
e7bade8
Review comments - extract `PathUtils` constants
currantw Feb 6, 2025
7d33cd8
Review comments - update `Analyzer` to not use `Optional`.
currantw Feb 7, 2025
7e473ee
Bunch of additional review comments.
currantw Feb 7, 2025
b19d28b
Spotless
currantw Feb 7, 2025
766fd65
Spotless
currantw Feb 7, 2025
b51affc
Additional review comments, including move constants to `ExprValueUti…
currantw Feb 7, 2025
4542d11
Review comments - update tests for exception msg
currantw Feb 7, 2025
2f8f136
Review comments - simplify `FlattenOperator.flattenExprValueAtPath`.
currantw Feb 7, 2025
4e6da8d
Change braces in documentation.
currantw Feb 10, 2025
163 changes: 132 additions & 31 deletions core/src/main/java/org/opensearch/sql/analysis/Analyzer.java
@@ -6,6 +6,9 @@
package org.opensearch.sql.analysis;

import static org.opensearch.sql.analysis.DataSourceSchemaIdentifierNameResolver.DEFAULT_DATASOURCE_NAME;
import static org.opensearch.sql.analysis.symbol.Namespace.FIELD_NAME;
import static org.opensearch.sql.analysis.symbol.Namespace.HIDDEN_FIELD_NAME;
import static org.opensearch.sql.analysis.symbol.Namespace.INDEX_NAME;
import static org.opensearch.sql.ast.tree.Sort.NullOrder.NULL_FIRST;
import static org.opensearch.sql.ast.tree.Sort.NullOrder.NULL_LAST;
import static org.opensearch.sql.ast.tree.Sort.SortOrder.ASC;
@@ -26,21 +29,21 @@
import com.google.common.collect.ImmutableSet;
import java.util.ArrayList;
import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.Objects;
import java.util.Optional;
import java.util.stream.Collectors;
import org.apache.commons.lang3.tuple.ImmutablePair;
import org.apache.commons.lang3.tuple.Pair;
import org.opensearch.sql.DataSourceSchemaName;
import org.opensearch.sql.analysis.symbol.Namespace;
import org.opensearch.sql.analysis.symbol.Symbol;
import org.opensearch.sql.ast.AbstractNodeVisitor;
import org.opensearch.sql.ast.expression.Argument;
import org.opensearch.sql.ast.expression.Field;
import org.opensearch.sql.ast.expression.Let;
import org.opensearch.sql.ast.expression.Literal;
import org.opensearch.sql.ast.expression.Map;
import org.opensearch.sql.ast.expression.ParseMethod;
import org.opensearch.sql.ast.expression.QualifiedName;
import org.opensearch.sql.ast.expression.UnresolvedExpression;
@@ -52,6 +55,7 @@
import org.opensearch.sql.ast.tree.FetchCursor;
import org.opensearch.sql.ast.tree.FillNull;
import org.opensearch.sql.ast.tree.Filter;
import org.opensearch.sql.ast.tree.Flatten;
import org.opensearch.sql.ast.tree.Head;
import org.opensearch.sql.ast.tree.Kmeans;
import org.opensearch.sql.ast.tree.Limit;
@@ -70,8 +74,11 @@
import org.opensearch.sql.ast.tree.UnresolvedPlan;
import org.opensearch.sql.ast.tree.Values;
import org.opensearch.sql.common.antlr.SyntaxCheckException;
import org.opensearch.sql.common.utils.StringUtils;
import org.opensearch.sql.data.model.ExprMissingValue;
import org.opensearch.sql.data.model.ExprValueUtils;
import org.opensearch.sql.data.type.ExprCoreType;
import org.opensearch.sql.data.type.ExprType;
import org.opensearch.sql.datasource.DataSourceService;
import org.opensearch.sql.exception.SemanticCheckException;
import org.opensearch.sql.expression.DSL;
@@ -94,6 +101,7 @@
import org.opensearch.sql.planner.logical.LogicalEval;
import org.opensearch.sql.planner.logical.LogicalFetchCursor;
import org.opensearch.sql.planner.logical.LogicalFilter;
import org.opensearch.sql.planner.logical.LogicalFlatten;
import org.opensearch.sql.planner.logical.LogicalLimit;
import org.opensearch.sql.planner.logical.LogicalML;
import org.opensearch.sql.planner.logical.LogicalMLCommons;
@@ -165,16 +173,15 @@ public LogicalPlan visitRelation(Relation node, AnalysisContext context) {
dataSourceSchemaIdentifierNameResolver.getSchemaName()),
dataSourceSchemaIdentifierNameResolver.getIdentifierName());
}
table.getFieldTypes().forEach((k, v) -> curEnv.define(new Symbol(Namespace.FIELD_NAME, k), v));
table.getFieldTypes().forEach((k, v) -> curEnv.define(new Symbol(FIELD_NAME, k), v));
table
.getReservedFieldTypes()
.forEach((k, v) -> curEnv.define(new Symbol(Namespace.HIDDEN_FIELD_NAME, k), v));
.forEach((k, v) -> curEnv.define(new Symbol(HIDDEN_FIELD_NAME, k), v));

// Put index name or its alias in index namespace on type environment so qualifier
// can be removed when analyzing qualified name. The value (expr type) here doesn't matter.
curEnv.define(
new Symbol(Namespace.INDEX_NAME, (node.getAlias() == null) ? tableName : node.getAlias()),
STRUCT);
new Symbol(INDEX_NAME, (node.getAlias() == null) ? tableName : node.getAlias()), STRUCT);

return new LogicalRelation(tableName, table);
}
@@ -187,7 +194,7 @@ public LogicalPlan visitRelationSubquery(RelationSubquery node, AnalysisContext

// Put subquery alias in index namespace so the qualifier can be removed
// when analyzing qualified name in the subquery layer
curEnv.define(new Symbol(Namespace.INDEX_NAME, node.getAliasAsTableName()), STRUCT);
curEnv.define(new Symbol(INDEX_NAME, node.getAliasAsTableName()), STRUCT);
return subquery;
}

@@ -219,14 +226,12 @@ public LogicalPlan visitTableFunction(TableFunction node, AnalysisContext contex
context.push();
TypeEnvironment curEnv = context.peek();
Table table = tableFunctionImplementation.applyArguments();
table.getFieldTypes().forEach((k, v) -> curEnv.define(new Symbol(Namespace.FIELD_NAME, k), v));
table.getFieldTypes().forEach((k, v) -> curEnv.define(new Symbol(FIELD_NAME, k), v));
table
.getReservedFieldTypes()
.forEach((k, v) -> curEnv.define(new Symbol(Namespace.HIDDEN_FIELD_NAME, k), v));
.forEach((k, v) -> curEnv.define(new Symbol(HIDDEN_FIELD_NAME, k), v));
curEnv.define(
new Symbol(
Namespace.INDEX_NAME, dataSourceSchemaIdentifierNameResolver.getIdentifierName()),
STRUCT);
new Symbol(INDEX_NAME, dataSourceSchemaIdentifierNameResolver.getIdentifierName()), STRUCT);
return new LogicalRelation(
dataSourceSchemaIdentifierNameResolver.getIdentifierName(),
tableFunctionImplementation.applyArguments());
@@ -277,7 +282,7 @@ public LogicalPlan visitRename(Rename node, AnalysisContext context) {
LogicalPlan child = node.getChild().get(0).accept(this, context);
ImmutableMap.Builder<ReferenceExpression, ReferenceExpression> renameMapBuilder =
new ImmutableMap.Builder<>();
for (Map renameMap : node.getRenameList()) {
for (org.opensearch.sql.ast.expression.Map renameMap : node.getRenameList()) {
Expression origin = expressionAnalyzer.analyze(renameMap.getOrigin(), context);
// We should define the new target field in the context instead of analyze it.
if (renameMap.getTarget() instanceof Field) {
@@ -328,11 +333,9 @@ public LogicalPlan visitAggregation(Aggregation node, AnalysisContext context) {
TypeEnvironment newEnv = context.peek();
aggregators.forEach(
aggregator ->
newEnv.define(
new Symbol(Namespace.FIELD_NAME, aggregator.getName()), aggregator.type()));
newEnv.define(new Symbol(FIELD_NAME, aggregator.getName()), aggregator.type()));
groupBys.forEach(
group ->
newEnv.define(new Symbol(Namespace.FIELD_NAME, group.getNameOrAlias()), group.type()));
group -> newEnv.define(new Symbol(FIELD_NAME, group.getNameOrAlias()), group.type()));
return new LogicalAggregation(child, aggregators, groupBys);
}

@@ -357,9 +360,8 @@ public LogicalPlan visitRareTopN(RareTopN node, AnalysisContext context) {
context.push();
TypeEnvironment newEnv = context.peek();
groupBys.forEach(
group -> newEnv.define(new Symbol(Namespace.FIELD_NAME, group.toString()), group.type()));
fields.forEach(
field -> newEnv.define(new Symbol(Namespace.FIELD_NAME, field.toString()), field.type()));
group -> newEnv.define(new Symbol(FIELD_NAME, group.toString()), group.type()));
fields.forEach(field -> newEnv.define(new Symbol(FIELD_NAME, field.toString()), field.type()));

List<Argument> options = node.getNoOfResults();
Integer noOfResults = (Integer) options.get(0).getValue().getValue();
@@ -425,8 +427,7 @@ public LogicalPlan visitProject(Project node, AnalysisContext context) {
context.push();
TypeEnvironment newEnv = context.peek();
namedExpressions.forEach(
expr ->
newEnv.define(new Symbol(Namespace.FIELD_NAME, expr.getNameOrAlias()), expr.type()));
expr -> newEnv.define(new Symbol(FIELD_NAME, expr.getNameOrAlias()), expr.type()));
List<NamedExpression> namedParseExpressions = context.getNamedParseExpressions();
return new LogicalProject(child, namedExpressions, namedParseExpressions);
}
@@ -448,6 +449,107 @@ public LogicalPlan visitEval(Eval node, AnalysisContext context) {
return new LogicalEval(child, expressionsBuilder.build());
}

/**
* Builds and returns a {@link org.opensearch.sql.planner.logical.LogicalFlatten} corresponding to
* the given flatten node, and adds the new fields to the current type environment.
*
* <p><b>Example</b>
*
* <p>Input Data:
*
* <pre>
* {
* struct: {
* integer: 0,
* nested_struct: { string: "value" }
* }
* }
* </pre>
*
* Query 1: <code>flatten struct</code>
*
* <pre>
* {
* struct: {
* integer: 0,
* nested_struct: { string: "value" }
* },
* integer: 0,
* nested_struct: { string: "value" }
* }
* </pre>
*
* Query 2: <code>flatten struct.nested_struct</code>
*
* <pre>
* {
* struct: {
* integer: 0,
* nested_struct: { string: "value" },
* string: "value"
* }
* }
* </pre>
*/
@Override
public LogicalPlan visitFlatten(Flatten node, AnalysisContext context) {
LogicalPlan child = node.getChild().getFirst().accept(this, context);

ReferenceExpression fieldExpr =
(ReferenceExpression) expressionAnalyzer.analyze(node.getField(), context);
String fieldName = fieldExpr.getAttr();

// [A] Determine fields to add
// ---------------------------

// Iterate over all the fields defined in the type environment. Find all those that are
// descended from the field being flattened, and determine the new paths to add. When
// determining the new paths, we need to preserve the portion of the path corresponding to the
// flattened field's parent, if one exists, in order to support flattening nested structs.

TypeEnvironment env = context.peek();
Map<String, ExprType> fieldsMap = env.lookupAllTupleFields(FIELD_NAME);

final String fieldParentPathPrefix =
fieldName.contains(ExprValueUtils.QUALIFIED_NAME_SEPARATOR)
? fieldName.substring(0, fieldName.lastIndexOf(ExprValueUtils.QUALIFIED_NAME_SEPARATOR))
+ ExprValueUtils.QUALIFIED_NAME_SEPARATOR
: "";

// Get entries for paths that are descended from the flattened field.
final String fieldDescendantPathPrefix = fieldName + ExprValueUtils.QUALIFIED_NAME_SEPARATOR;
List<Map.Entry<String, ExprType>> fieldDescendantEntries =
fieldsMap.entrySet().stream()
.filter(e -> e.getKey().startsWith(fieldDescendantPathPrefix))
.toList();

// Get fields to add from descendant entries.
Map<String, ExprType> addFieldsMap = new HashMap<>();
for (Map.Entry<String, ExprType> entry : fieldDescendantEntries) {
String newPath =
fieldParentPathPrefix + entry.getKey().substring(fieldDescendantPathPrefix.length());
addFieldsMap.put(newPath, entry.getValue());
}

// [B] Add new fields to type environment
// --------------------------------------

// Verify that new fields do not overwrite an existing field.
List<String> duplicateFieldNames =
addFieldsMap.keySet().stream().filter(fieldsMap::containsKey).toList();

if (!duplicateFieldNames.isEmpty()) {
throw new SemanticCheckException(
StringUtils.format(
"Flatten command cannot overwrite fields: %s",
String.join(", ", duplicateFieldNames)));
}

addFieldsMap.forEach((name, type) -> env.define(DSL.ref(name, type)));

return new LogicalFlatten(child, fieldExpr);
}
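The path rewriting in steps [A] and [B] can be sketched outside the planner with plain JDK collections. Everything below is a hypothetical standalone model, not the project's API: the class and method names are invented, and `ExprType` values are replaced by plain strings; only the prefix logic mirrors `visitFlatten`.

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Hypothetical standalone model of the field-path rewriting in Analyzer.visitFlatten.
public class FlattenPathSketch {

  // Mirrors ExprValueUtils.QUALIFIED_NAME_SEPARATOR.
  static final String SEPARATOR = ".";

  /** Returns the new field paths produced by flattening {@code fieldName}. */
  static Map<String, String> fieldsToAdd(Map<String, String> fieldsMap, String fieldName) {
    // Preserve the parent portion of the path so nested structs flatten in place.
    String parentPrefix =
        fieldName.contains(SEPARATOR)
            ? fieldName.substring(0, fieldName.lastIndexOf(SEPARATOR)) + SEPARATOR
            : "";

    // Rebase every descendant of the flattened field onto the parent prefix.
    String descendantPrefix = fieldName + SEPARATOR;
    Map<String, String> addFields = new HashMap<>();
    for (Map.Entry<String, String> entry : fieldsMap.entrySet()) {
      if (entry.getKey().startsWith(descendantPrefix)) {
        String newPath = parentPrefix + entry.getKey().substring(descendantPrefix.length());
        addFields.put(newPath, entry.getValue());
      }
    }

    // Flattening must not overwrite a field that already exists.
    List<String> duplicates =
        addFields.keySet().stream().filter(fieldsMap::containsKey).toList();
    if (!duplicates.isEmpty()) {
      throw new IllegalStateException(
          "Flatten command cannot overwrite fields: " + String.join(", ", duplicates));
    }
    return addFields;
  }
}
```

For the Javadoc example above, flattening `struct` adds the paths `integer`, `nested_struct`, and `nested_struct.string` to the environment, while flattening `struct.nested_struct` adds `struct.string`.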

/** Build {@link ParseExpression} to context and skip to child nodes. */
@Override
public LogicalPlan visitParse(Parse node, AnalysisContext context) {
@@ -465,7 +567,7 @@ public LogicalPlan visitParse(Parse node, AnalysisContext context) {
ParseExpression expr =
ParseUtils.createParseExpression(
parseMethod, sourceField, patternExpression, DSL.literal(group));
curEnv.define(new Symbol(Namespace.FIELD_NAME, group), expr.type());
curEnv.define(new Symbol(FIELD_NAME, group), expr.type());
context.getNamedParseExpressions().add(new NamedExpression(group, expr));
});
return child;
@@ -524,7 +626,7 @@ public LogicalPlan visitKmeans(Kmeans node, AnalysisContext context) {
java.util.Map<String, Literal> options = node.getArguments();

TypeEnvironment currentEnv = context.peek();
currentEnv.define(new Symbol(Namespace.FIELD_NAME, "ClusterID"), ExprCoreType.INTEGER);
currentEnv.define(new Symbol(FIELD_NAME, "ClusterID"), ExprCoreType.INTEGER);

return new LogicalMLCommons(child, "kmeans", options);
}
@@ -537,13 +639,13 @@ public LogicalPlan visitAD(AD node, AnalysisContext context) {

TypeEnvironment currentEnv = context.peek();

currentEnv.define(new Symbol(Namespace.FIELD_NAME, RCF_SCORE), ExprCoreType.DOUBLE);
currentEnv.define(new Symbol(FIELD_NAME, RCF_SCORE), ExprCoreType.DOUBLE);
if (Objects.isNull(node.getArguments().get(TIME_FIELD))) {
currentEnv.define(new Symbol(Namespace.FIELD_NAME, RCF_ANOMALOUS), ExprCoreType.BOOLEAN);
currentEnv.define(new Symbol(FIELD_NAME, RCF_ANOMALOUS), ExprCoreType.BOOLEAN);
} else {
currentEnv.define(new Symbol(Namespace.FIELD_NAME, RCF_ANOMALY_GRADE), ExprCoreType.DOUBLE);
currentEnv.define(new Symbol(FIELD_NAME, RCF_ANOMALY_GRADE), ExprCoreType.DOUBLE);
currentEnv.define(
new Symbol(Namespace.FIELD_NAME, (String) node.getArguments().get(TIME_FIELD).getValue()),
new Symbol(FIELD_NAME, (String) node.getArguments().get(TIME_FIELD).getValue()),
ExprCoreType.TIMESTAMP);
}
return new LogicalAD(child, options);
@@ -578,8 +680,7 @@ public LogicalPlan visitML(ML node, AnalysisContext context) {
LogicalPlan child = node.getChild().get(0).accept(this, context);
TypeEnvironment currentEnv = context.peek();
node.getOutputSchema(currentEnv).entrySet().stream()
.forEach(
v -> currentEnv.define(new Symbol(Namespace.FIELD_NAME, v.getKey()), v.getValue()));
.forEach(v -> currentEnv.define(new Symbol(FIELD_NAME, v.getKey()), v.getValue()));

return new LogicalML(child, node.getArguments());
}
@@ -620,7 +721,7 @@ public LogicalPlan visitTrendline(Trendline node, AnalysisContext context) {
resolvedField.type().typeName()));
}
}
currEnv.define(new Symbol(Namespace.FIELD_NAME, computation.getAlias()), averageType);
currEnv.define(new Symbol(FIELD_NAME, computation.getAlias()), averageType);
computationsAndTypes.add(Pair.of(computation, averageType));
});

core/src/main/java/org/opensearch/sql/analysis/DataSourceSchemaIdentifierNameResolver.java
@@ -8,6 +8,7 @@
package org.opensearch.sql.analysis;

import java.util.List;
import org.opensearch.sql.data.model.ExprValueUtils;
import org.opensearch.sql.datasource.DataSourceService;

public class DataSourceSchemaIdentifierNameResolver {
@@ -21,8 +22,6 @@ public class DataSourceSchemaIdentifierNameResolver {
private final String identifierName;
private final DataSourceService dataSourceService;

private static final String DOT = ".";

/**
* Data model for capturing dataSourceName, schema and identifier from fully qualifiedName. In the
* current state, it is used to capture DataSourceSchemaTable name and DataSourceSchemaFunction in
@@ -35,7 +34,7 @@ public DataSourceSchemaIdentifierNameResolver(
DataSourceService dataSourceService, List<String> parts) {
this.dataSourceService = dataSourceService;
List<String> remainingParts = captureSchemaName(captureDataSourceName(parts));
identifierName = String.join(DOT, remainingParts);
identifierName = String.join(ExprValueUtils.QUALIFIED_NAME_SEPARATOR, remainingParts);
}

public String getIdentifierName() {
core/src/main/java/org/opensearch/sql/ast/AbstractNodeVisitor.java
@@ -47,6 +47,7 @@
import org.opensearch.sql.ast.tree.FetchCursor;
import org.opensearch.sql.ast.tree.FillNull;
import org.opensearch.sql.ast.tree.Filter;
import org.opensearch.sql.ast.tree.Flatten;
import org.opensearch.sql.ast.tree.Head;
import org.opensearch.sql.ast.tree.Kmeans;
import org.opensearch.sql.ast.tree.Limit;
@@ -107,6 +108,10 @@ public T visitTableFunction(TableFunction node, C context) {
return visitChildren(node, context);
}

public T visitFlatten(Flatten node, C context) {
return visitChildren(node, context);
}
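The new `visitFlatten` hook above follows the visitor's usual default: delegate to `visitChildren` so existing visitors keep working, and a subclass overrides only the nodes it handles. A self-contained sketch of that double dispatch (all type names hypothetical and heavily simplified from the real AST classes):

```java
import java.util.List;

// Hypothetical, simplified model of the AbstractNodeVisitor dispatch.
class Node {
  List<Node> children = List.of();

  <T, C> T accept(Visitor<T, C> visitor, C context) {
    return visitor.visitChildren(this, context);
  }
}

class FlattenNode extends Node {
  @Override
  <T, C> T accept(Visitor<T, C> visitor, C context) {
    // Double dispatch to the node-specific hook.
    return visitor.visitFlatten(this, context);
  }
}

abstract class Visitor<T, C> {
  public T visitChildren(Node node, C context) {
    T result = null;
    for (Node child : node.children) {
      result = child.accept(this, context);
    }
    return result;
  }

  // Default: fall back to visitChildren, like the new hook above.
  public T visitFlatten(FlattenNode node, C context) {
    return visitChildren(node, context);
  }
}
```

A visitor that cares about flatten overrides only `visitFlatten`; every other node type still flows through `visitChildren` untouched.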

public T visitFilter(Filter node, C context) {
return visitChildren(node, context);
}
5 changes: 5 additions & 0 deletions core/src/main/java/org/opensearch/sql/ast/dsl/AstDSL.java
@@ -51,6 +51,7 @@
import org.opensearch.sql.ast.tree.Eval;
import org.opensearch.sql.ast.tree.FillNull;
import org.opensearch.sql.ast.tree.Filter;
import org.opensearch.sql.ast.tree.Flatten;
import org.opensearch.sql.ast.tree.Head;
import org.opensearch.sql.ast.tree.Limit;
import org.opensearch.sql.ast.tree.Parse;
@@ -104,6 +105,10 @@ public static Eval eval(UnresolvedPlan input, Let... projectList) {
return new Eval(Arrays.asList(projectList)).attach(input);
}

public static Flatten flatten(UnresolvedPlan input, Field field) {
return new Flatten(field).attach(input);
}

public static UnresolvedPlan projectWithArg(
UnresolvedPlan input, List<Argument> argList, UnresolvedExpression... projectList) {
return new Project(Arrays.asList(projectList), argList).attach(input);
2 changes: 0 additions & 2 deletions core/src/main/java/org/opensearch/sql/ast/tree/Eval.java
@@ -10,14 +10,12 @@
import lombok.EqualsAndHashCode;
import lombok.Getter;
import lombok.RequiredArgsConstructor;
import lombok.Setter;
import lombok.ToString;
import org.opensearch.sql.ast.AbstractNodeVisitor;
import org.opensearch.sql.ast.expression.Let;

/** AST node represent Eval operation. */
@Getter
@Setter
@ToString
@EqualsAndHashCode(callSuper = false)
@RequiredArgsConstructor