-
I see the conundrum of not being able to debug this. It seems potentially valuable to extend the serving API with an endpoint that returns the final dataframe built from the parsed form inputs, i.e. exactly what would be fed to the model on a "real" predict request. @noahlh would it be difficult for you to reproduce the behavior using master (see step 4 of the contributing guide)? Once you can reproduce this, I would try adding print statements in
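The debug endpoint suggested above might look roughly like this. This is a minimal sketch, not Ludwig's actual serving code; `parse_form_inputs` and `debug_preview` are hypothetical names, and the field names are made up. The point is that it exposes both the parsed row and its dtypes, so the server-side dataframe can be compared against the offline `predict` pipeline:

```python
import pandas as pd

def parse_form_inputs(form: dict) -> pd.DataFrame:
    """Mimic what a serving layer does with multipart form data:
    every value arrives as a string, and a one-row DataFrame is built."""
    return pd.DataFrame([form])

def debug_preview(form: dict) -> dict:
    """Hypothetical debug payload: the parsed row plus its dtypes,
    for comparison with what `ludwig predict` sees."""
    df = parse_form_inputs(form)
    return {
        "records": df.to_dict(orient="records"),
        "dtypes": {col: str(dt) for col, dt in df.dtypes.items()},
    }

preview = debug_preview({"price": "42.5", "category": "a", "text": "hello"})
print(preview["dtypes"])  # every column is 'object' -- the number did not survive as a float
```

Wiring something like this into a `/debug_predict` route would make the "what did the server actually see" question answerable without print statements.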
-
I think that #2299 should fix this. @noahlh If you see this before we merge, you can try out the PR branch and let me know if this fixes things on your side. Thanks for catching this bug, cheers!
-
Posting here vs. in Issues since I'm presuming this is a problem on my end and not necessarily a Ludwig issue, but if I uncover a real issue I will write it up separately.
The problem I'm facing: I've trained a model and run `ludwig predict` on the original data set (which was in JSON format), and the predictions look good. I'm now trying to replicate those exact predictions via `ludwig serve`, and I'm getting completely different results. I'm starting with a single prediction. I've set up the query exactly as documented (multipart form) and I am getting a response, so all of my input fields are there and being recognized, but the prediction is just completely out of line.
This is for a regression task, and the columns are a mix of number, text, and category types.
Here's an example of one line in my JSON used to train & predict the model:
And I'm submitting my curl request as follows:
I'm assuming the core issue is that the form data being submitted is parsed and preprocessed differently via `serve` than when it's imported from JSON and run through the `predict` pipeline, but I'm stuck on how to debug this since I have no clue where the data differences are. Does anyone have any ideas?