Too slow #4
So what's the problem here? Are the computations too slow, or is it just generating test data that's deathly slow? If the plan is to update the […], I don't think we really need very much support for doing linear computations (do we?), but if so we might want to check out some of the pure-Haskell options, like Edward Kmett's […].
On Mon, Oct 13, 2014 at 8:43 PM, Eric Pashman wrote: […]
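Assuming the pure-Haskell option meant above is Kmett's `linear` package, low-dimensional work with its fixed-size vector types looks roughly like this. This is a sketch, not code from this repo:

```haskell
import Linear (V3(..), dot, (^+^), (*^))

-- linear's V3 is a plain strict three-field record, so small-dimension
-- arithmetic avoids linked lists of boxed numbers entirely.
main :: IO ()
main = do
  print (V3 1 2 3 `dot` V3 4 5 6 :: Double)        -- 32.0
  print (2 *^ V3 1 0 0 ^+^ V3 0 1 0 :: V3 Double)  -- V3 2.0 1.0 0.0
```

The fixed-size types (`V2`, `V3`, `V4`) are a good fit for low-dimensional cases but say nothing about the high-dimensional story.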
I pushed some initial work to re-write the […]. Take a look at it if you get a chance. The code is broken in a couple of ways. I think the "no instance" errors on line 67 can be avoided by using […]. It's the same old thing: if there is a way to get this to work, it might be by getting rid of the polymorphism, including in the definition of […]. Anyway, I wonder whether there is a way to do this that is not completely bonkers.
I re-wrote the […]. By the way, do you have much experience using […]?
I'll take a look. We want the unboxed version, though.

On Thu, Oct 16, 2014 at 3:46 PM, Eric Pashman wrote: […]
I just pushed everything into a new branch, […]. We're going to run into the same problem with too little polymorphism, I imagine, if we use unboxed vectors. I don't have a good grasp on why this module is so slow, but I figured some stream fusion was worth a shot. I think we get that from the […].
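For concreteness, a minimal sketch of what an unboxed-vector kernel looks like (the function is a stand-in, not this module's actual code): `U.zipWith` and `U.sum` are subject to the `vector` package's stream-fusion rewrite rules, so the pipeline compiles to a single loop with no intermediate vector allocated.

```haskell
import qualified Data.Vector.Unboxed as U

-- Stand-in kernel: dot product over unboxed vectors. zipWith/sum fuse
-- into one loop over raw Doubles, with no boxing in between.
dotU :: U.Vector Double -> U.Vector Double -> Double
dotU xs ys = U.sum (U.zipWith (*) xs ys)

main :: IO ()
main = print (dotU (U.fromList [1, 2, 3]) (U.fromList [4, 5, 6]))  -- 32.0
```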
I think we need to drop `ad` to use unboxed vectors, if that has not already […].

On Thu, Oct 16, 2014 at 4:19 PM, Eric Pashman wrote: […]
Wait, I mean: I'm basically sure we have to drop `ad`. I mean […].

On Thu, Oct 16, 2014 at 4:38 PM, Jonathan Fischoff wrote: […]
You think so? We can definitely make everything work using […]. If we're going to reimplement everything from the ground up around another way of computing gradients, it probably makes sense to keep the existing code around as an option for low-dimensional stuff. The […]. Still, it probably makes sense to see where stream fusion gets us.
I haven't looked at `ad` for a bit, but I thought the Traversable constraint […].

On Thu, Oct 16, 2014 at 5:18 PM, Eric Pashman wrote: […]
I think you're right. I had in mind another problem, the […]. But I meant that things can (I think) be made to work, at least in some sense, using boxed vectors or the […]. I really don't have any good ideas for how to handle the high-dimensional linear case. I'm just saying that it might be useful to have a separate implementation there while retaining what we have for the non-linear and low-dimensional linear cases.
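This is roughly why `ad` forces boxed containers: `grad` requires a `Traversable` container of a polymorphic number type, which lists and boxed `Data.Vector` satisfy but unboxed vectors cannot (an unboxed vector is not even a Functor, and its elements can't be `ad`'s wrapper types). A hedged sketch:

```haskell
import Numeric.AD (grad)

-- grad needs (Traversable f, Num a) and an objective polymorphic in
-- the number type, so the container must hold boxed, wrapped numbers;
-- Data.Vector.Unboxed can't fill this role.
f :: Num a => [a] -> a
f [x, y] = x * x + 3 * y
f _      = error "expected two inputs"

main :: IO ()
main = print (grad f [2, 1])  -- [4, 3]
```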
I updated the […].

I ran the tests single-threaded, and they pass in about ten minutes on my machine, a MacBook Air with a rinky-dink processor. I guesstimate that's about two orders of magnitude faster than the tests ran previously. (When I let them run to completion a week ago, it took overnight plus several hours.) Run everything and see what you think. There's still a lot of ugly nonsense going on in […].
Actually, the tests pass in about three minutes on my machine. So, yes, this is at least a couple of orders of magnitude faster than using lists.
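A gap like that is easy to sanity-check in isolation with a toy `criterion` benchmark; the kernels below are stand-ins, not the project's tests:

```haskell
import Criterion.Main (bench, bgroup, defaultMain, nf)
import qualified Data.Vector.Unboxed as U

-- Boxed-list pipeline vs. its fused unboxed-vector counterpart.
sumSqList :: [Double] -> Double
sumSqList = sum . map (\x -> x * x)

sumSqVec :: U.Vector Double -> Double
sumSqVec = U.sum . U.map (\x -> x * x)

main :: IO ()
main = defaultMain
  [ bgroup "sum-of-squares"
      [ bench "list"           (nf sumSqList (replicate 100000 1.5))
      , bench "unboxed vector" (nf sumSqVec  (U.replicate 100000 1.5))
      ]
  ]
```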
Oh, awesome.

On Fri, Oct 17, 2014 at 2:06 PM, Eric Pashman wrote: […]
The […]. I'll wait for your OK to merge into […]. Anyway, take a look. And let me know whether you think we can really get away with using the […].
I'm out today; I'll take a look tomorrow. Thanks.

On Sat, Oct 18, 2014 at 12:57 PM, Eric Pashman wrote: […]
Yep, no rush. Just want to make sure I get your thoughts on the linear stuff at some point. The use-cases I'm familiar with are all pretty low-dim, so please don't imagine I know what will work for image processing or whatever. :P
Doing numerical processing with linked lists of boxed numbers is silly. Linear should be much faster. I don't think I need data to prove it.