[TRACKER] Port to C++ for Godot Engine #8
This plan is almost certainly not going to work.
The Kusudama visualizer in the YouTube link from my reply in #7 is primarily shader-based; if you'd like, I can share the code for that. Rolling your own visualizer would require understanding the math behind Kusudamas, which might not be entirely obvious to infer from the Java code (and I've yet to write a paper on them).
Your peers' concerns about "unnaturalness" are well founded, but they aren't a limitation of FABRIK per se. They are more a problem resulting from a combination of the following three factors.
This library currently accounts for factor 1 (the solver can take the orientation of effectors into account -- though, at some cost to speed and convergence stability in edge cases) and factor 2 (via Kusudama constraints). For your particular use case, factor 3 is both the most important and I think the most trivial to handle. If done correctly, a solution to 3 would (for your use case) almost completely obviate concerns regarding 1 and 2. Depending on whether you're using this library as a reference, or developing a full port, I can either implement a solution to 3 in this library, or offer some suggestions as to a viable approach, and let you figure out the details as relevant to your specific implementation. But the very broad idea is:
|
Thank you for your detailed reply. If you could implement this in this library, I could reference how it is implemented. The shader code would be useful too. |
I've done some cleanup of the repository, and removed the binary library from the repository. There's also some updates to the readme. |
I've just updated the library with a massively improved algorithm. As far as I can tell, it sort of "just works" and probably won't require any of the preprocessing suggestions above (check the video in the readme). I've also removed all of the old solvers in favor of this much better solver. If you're still porting this, let me know if you need any help. |
I am still porting, but I was distracted by other tasks. I'll have to review the code. |
Is ConstraintExample_Kusudama.pde compatible with the new API? |
You mentioned you could provide help with the cone based constraint shader. I'm having trouble understanding that approach. |
My goal is to create a sample with three or more sensors affecting a skeleton I've defined in a C++ engine. This requires visualization and editing of constraints.
I'm away from my dev machine at the moment, but I'll be home tonight and will upload the shader then.
In that case, I'll also update the Processing extension of the library available at https://github.com/EGjoni/Everything-Will-Be-IK-Processing. The latest version of which will include a visualizer for Kusudamas (and a number of other useful examples for generally getting the most out of the library). If you're willing to put in a little extra work to port the JSON based armature Saver and Loader, it will likely help you quite a bit in ensuring the armature you create behaves the same way in your C++ port as it does in Java. Anyway, if you'd like to have something to chew on in the meantime, the basic idea behind Kusudamas is this: https://i.imgur.com/ERuipqk.png Except the regions are on a sphere (so the straight lines end up being curves). In effect, this means you can add a sequence of LimitCones, and the kusudama will treat them as specifying a "path" of allowable space on the sphere that gracefully encompasses the region between one cone and the next. It also means that if you specify just one cone, a kusudama behaves identically to traditional cone-constrained ball-and-socket joints.
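For a concrete feel of the single-cone case described above, here is a minimal sketch in C++ (the port's target language). The names are illustrative rather than the library's API: a direction on the constraint sphere is allowed if its arc distance from the cone's center is within the cone's half-angle.

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3 &a, const Vec3 &b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

// `dir` and `cone_center` are assumed normalized; `radius` is the cone's
// half-angle in radians. With multiple cones, the allowed region additionally
// includes the tangent-bounded path between consecutive cones, as in the
// shader later in this thread.
bool is_in_cone(const Vec3 &dir, const Vec3 &cone_center, float radius) {
    float arc_dist = std::acos(dot(dir, cone_center)); // angle between them
    return arc_dist <= radius;
}
```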
I'm not 100% sure. But on the topic of compatibility, I've made a number of further optimizations (>400% speed gains) and improvements/feature additions since the last pull to the master branch. So if you're starting the port, please use the "modularization" branch to make sure you benefit from the latest changes. |
What is the difference between double and single precision code? Is it more precise? Faster? |
Thank you for your help. The processing example code makes it easier for me to understand the implementation. |
Hey, sorry. I'm kind of backlogged with work until Friday. So you'll have to wait until then for the more complete Processing examples (there'll be some very useful stuff for your use case demoed in there that you'll want to look at when I upload it).
The double precision version is a little more precise and in Java also a little faster due to how Java's trigonometric operations are handled (specifically, Java's trig functions are on doubles, and so the trigonometric operations for the single precision format have the same cost as the double precision format + the cost of casting from double to float). The reason for the single precision library is pragmatic. Specifically
Sorry again for the delay. Here is the shader code in the meantime, if it helps.

```glsl
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif
varying vec4 vertColor;
//Model space normal direction of the current fragment
//since we're on a sphere, this is literally just the fragment's position in
//modelspace
varying vec3 vertNormal;
//This shader can display up to 10 cones (represented by 30 4d vectors [tangentCone, limitCone, tangentCone])
// alphachannel represents radius, rgb channels represent xyz coordinates of
// the cone direction vector in model space
uniform vec4 coneSequence[30];
uniform int coneCount;
//Make this "true" for screendoor transparency (randomly discarding fragments)
//so that you can blur the result in another pass. Otherwise make it
//false for a solid shell.
uniform bool multiPass;
//Following three varyings are
//Only used for fake lighting.
//Not conceptually relevant
varying vec3 vertWorldNormal;
varying vec3 vertLightDir;
varying vec4 vertWorldPos;
///NOISE FUNCTIONS FOR FANCY TRANSPARENCY RENDERING
float hash( uint n ) { // from https://www.shadertoy.com/view/llGSzw Base: Hugo Elias. ToFloat: http://iquilezles.org/www/articles/sfrand/sfrand.htm
n = (n << 13U) ^ n;
n = n * (n * n * 15731U + 789221U) + 1376312589U;
return uintBitsToFloat( (n>>9U) | 0x3f800000U ) - 1.;
}
float noise(vec2 U) {
return hash(uint(U.x+5000.0*U.y));
}
bool randBit(vec2 U) {
float dist2 = 1.0;
return 0.5 < (noise(U) * 4. -(noise(U+vec2(dist2,0.))+noise(U+vec2(0.,dist2))+noise(U-vec2(0.,dist2))+noise(U-vec2(dist2,0.))) + 0.5);
}
///END OF NOISE FUNCTIONS FOR FANCY TRANSPARENCY RENDERING.
//Determine if the fragment is in the path between two limitcones.
//this should only be relied on after verifying that the fragment
//isn't in the two limitcones it's checking between.
bool isInInterConePath(in vec3 normalDir, in vec4 tangent1, in vec4 cone1, in vec4 tangent2, in vec4 cone2) {
vec3 c1xc2 = cross(cone1.xyz, cone2.xyz);
float c1c2dir = dot(normalDir, c1xc2);
if(c1c2dir < 0.0) {
vec3 c1xt1 = cross(cone1.xyz, tangent1.xyz);
vec3 t1xc2 = cross(tangent1.xyz, cone2.xyz);
float c1t1dir = dot(normalDir, c1xt1);
float t1c2dir = dot(normalDir, t1xc2);
return (c1t1dir > 0.0 && t1c2dir > 0.0);
}else {
vec3 t2xc1 = cross(tangent2.xyz, cone1.xyz);
vec3 c2xt2 = cross(cone2.xyz, tangent2.xyz);
float t2c1dir = dot(normalDir, t2xc1);
float c2t2dir = dot(normalDir, c2xt2);
return (c2t2dir > 0.0 && t2c1dir > 0.0);
}
return false;
}
//determines the current draw condition based on the desired draw condition in the setTo argument
// -3 = disallowed entirely;
// -2 = disallowed and on tangentCone boundary
// -1 = disallowed and on controlCone boundary
// 0 = allowed and empty;
// 1 = allowed and on controlCone boundary
// 2 = allowed and on tangentCone boundary
int getAllowabilityCondition(in int currentCondition, in int setTo) {
if((currentCondition == -1 || currentCondition == -2)
&& setTo >= 0) {
return currentCondition * -1;
} else if(currentCondition == 0 && (setTo == -1 || setTo == -2)) {
return setTo * -2;
}
return max(currentCondition, setTo);
}
//returns 1 if normalDir is more than (cone.a + boundaryWidth/2) radians from cone.rgb
//returns 0 if normalDir is within the boundary band (cone.a ± boundaryWidth/2) radians of cone.rgb
//returns -1 if normalDir is less than (cone.a - boundaryWidth/2) radians from cone.rgb
int isInCone(in vec3 normalDir, in vec4 cone, in float boundaryWidth) {
float arcDistToCone = acos(dot(normalDir, cone.rgb));
if(arcDistToCone > (cone.a+(boundaryWidth/2.))) {
return 1;
}
if(arcDistToCone < cone.a-(boundaryWidth/2.)) {
return -1;
}
return 0;
}
//returns a color corresponding to the allowability of this region, or otherwise the boundaries corresponding
//to the various cones and tangentCones
vec4 colorAllowed(in vec3 normalDir, in int coneCount, in float boundaryWidth) {
normalDir = normalize(normalDir);
//accumulated output color. (This declaration was missing from the original
//snippet; initializing from vertColor is an assumption.)
vec4 result = vertColor;
int currentCondition = -3;
if(coneCount == 1) {
vec4 cone = coneSequence[0];
int inCone = isInCone(normalDir, cone, boundaryWidth);
inCone = inCone == 0 ? -1 : inCone < 0 ? 0 : -3;
currentCondition = getAllowabilityCondition(currentCondition, inCone);
} else {
for(int i=0; i<coneCount-1; i++) {
//each cone occupies 3 slots of coneSequence (cone plus two tangents), so the
//index advances by 3 per cone. (The original read "i+=3" together with
//idx = i*3, which would skip and misindex cones given the packing above.)
int idx = i*3;
vec4 cone1 = coneSequence[idx];
vec4 tangent1 = coneSequence[idx+1];
vec4 tangent2 = coneSequence[idx+2];
vec4 cone2 = coneSequence[idx+3];
int inCone1 = isInCone(normalDir, cone1, boundaryWidth);
inCone1 = inCone1 == 0 ? -1 : inCone1 < 0 ? 0 : -3;
currentCondition = getAllowabilityCondition(currentCondition, inCone1);
int inCone2 = isInCone(normalDir, cone2, boundaryWidth);
inCone2 = inCone2 == 0 ? -1 : inCone2 < 0 ? 0 : -3;
currentCondition = getAllowabilityCondition(currentCondition, inCone2);
int inTan1 = isInCone(normalDir, tangent1, boundaryWidth);
int inTan2 = isInCone(normalDir, tangent2, boundaryWidth);
if(inTan1 < 1 || inTan2 < 1) {
inTan1 = inTan1 == 0 ? -2 : -3;
currentCondition = getAllowabilityCondition(currentCondition, inTan1);
inTan2 = inTan2 == 0 ? -2 : -3;
currentCondition = getAllowabilityCondition(currentCondition, inTan2);
} else {
bool inIntercone = isInInterConePath(normalDir, tangent1, cone1, tangent2, cone2);
int interconeCondition = inIntercone ? 0 : -3;
currentCondition = getAllowabilityCondition(currentCondition, interconeCondition);
}
}
}
if(multiPass && (currentCondition == -3 || currentCondition > 0)) {
/////////
//CODE FOR FANCY BLURRED TRANSPARENCY.
//NOT OTHERWISE CONCEPTUALLY RELEVANT TO
//TO VISUALIZATION
////////
vec3 randDir = vec3(normalDir.x * noise(normalDir.xy)/50.0, normalDir.y * noise(normalDir.yz)/50.0, normalDir.z * noise(normalDir.zx)/50.0);
//note: the jittered direction is immediately overwritten here, which disables
//the noise offset above (left as in the original)
randDir = normalDir;
float zAdd = abs(vertWorldPos.z);
float lon = atan(randDir.x/randDir.z) + 3.14159265/2.0;
float lat = atan(randDir.y/randDir.x) + 3.14159265/2.0;
bool latDraw = randBit(vec2(lat, lon));//mod(lat, 0.005) < 0.00499;
bool lonDraw = randBit(vec2(lon, lat));//mod(lon, 0.005) < 0.00499;
if(randBit(vec2(lon, lat))) {
result = vec4(0.0,0.0,0.0,0.0);
}
////////
//END CODE FOR FANCY BLURRED TRANSPARENCY
///////
} else if (currentCondition != 0) {
float onTanBoundary = abs(currentCondition) == 2 ? 0.3 : 0.0;
float onConeBoundary = abs(currentCondition) == 1 ? 0.3 : 0.0;
//return distCol;
result += vec4(0.0, onConeBoundary, onTanBoundary, 1.0);
} else {
discard;
}
return result;
}
void main() {
vec3 normalDir = normalize(vertNormal); // the vertex normal in Model Space.
float lightScalar = dot(vertLightDir, vec3(0.5,-1.,0.5));
lightScalar *= lightScalar*lightScalar;
vec4 colorAllowed = colorAllowed(normalDir, coneCount, 0.02);
if(colorAllowed.a == 0.0)
discard;
colorAllowed += (colorAllowed + fwidth(colorAllowed));
colorAllowed /= 2.0;
vec3 lightCol = vec3(1.0,0.8,0.0);
float gain = vertWorldNormal.z < 0.0 ? -0.3 : 0.5;
colorAllowed.rgb = (colorAllowed.rgb + lightCol*(lightScalar + gain)) / 2.;
vec4 specCol = vec4(1.0, 1.0, 0.6, colorAllowed.a);
colorAllowed = colorAllowed.g > 0.8 ? colorAllowed+specCol : colorAllowed;
gl_FragColor = colorAllowed;
}
```
|
Additionally, do note that if you're making a single precision port, you should reference the single precision implementation, specifically with regard to the QCP.java class, as the single precision version of QCP includes some modifications for numerical stability which the double precision version lacks. |
My plan is to make a single precision port, that has an alias for using double precision. Thanks for the insight. |
Alright, I just pushed the latest code (with performance and stability improvements). You'll most want to take a look at FullExample, which programmatically specifies a humanoid armature, and the Humanoid_Holding_Item demo, which loads an armature from JSON and demos simultaneous head, pelvis, and wrist tracking. |
I'm away this weekend, but I'm starting to get back into this code. For Godot Engine, I have gltf2 scene export. In theory, anything that is in a scene can be exported as possibly-animated geometry.
Interesting. I was unaware of gltf2 as a thing that exists. It should be moderately straightforward to convert from EWBIK's save/load format to gltf2. The formats are already extremely similar, with the exceptions that EWBIK's format allows for skew/shear, that EWBIK's quaternions are saved in [w,x,y,z] order (as opposed to gltf2's [x,y,z,w]), and that EWBIK's format references nodes by identity hash instead of by array index. But I'm not sure if it would be more straightforward than just porting the save/load code already provided. |
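The quaternion reordering mentioned above is a pure component shuffle. A sketch in C++ (function names are illustrative):

```cpp
#include <array>

// EWBIK saves quaternions as [w, x, y, z]; glTF2 stores [x, y, z, w].
std::array<float, 4> ewbik_to_gltf2(const std::array<float, 4> &q_wxyz) {
    return { q_wxyz[1], q_wxyz[2], q_wxyz[3], q_wxyz[0] };
}

std::array<float, 4> gltf2_to_ewbik(const std::array<float, 4> &q_xyzw) {
    return { q_xyzw[3], q_xyzw[0], q_xyzw[1], q_xyzw[2] };
}
```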
The idea is that this code is ported to C++ and included in Godot. The animations are captured and constrained to the sensor positions. The animations can then be exported from the software as something Blender can read. My plan is to port the code using the internal APIs. Exporting the glTF2 format is notable because it can be used to export scenes from your projects, though. |
Tantalizing! Keep me updated on any interesting progress you make or approaches you take or snags you hit, if you could. It would be immensely helpful for figuring out the least painful ways to structure future versions of the library. |
(Also, the algorithm isn't technically CCD. Something like "Cyclic Mean Deviation Descent" [CMDD] might be more accurate, give or take a few letters.) |
I renamed the class "Cyclic Mean Deviation Descent" [CMDD]. Also, the constraints are on the entire skeleton rather than a bone chain, right? |
It should be possible in theory to use this for skeletal animation re-targeting too:
1. Make the scales relatively match.
2. Set both skeletons to a standard pose, let's say an "A" pose.
3. Reset the original skeleton and animations to be based on the "A" pose.
4. Map each original skeleton bone to the new skeleton bone.
5. For each key-frame in the original skeleton, set the original bone as a target and re-target every key-framed bone to the new Kusudama-constrained rig.

Thoughts? |
I'm having trouble understanding what these are:
What are simLocalAxes and simConstraintAxes? What space are they in, global or bone-relative? What do updateTargetHeadings() and updateTipHeadings() do? You also swap between getGlobalMBasis and the local Basis. In theory a basis can hold both scale and rotation, but I think the code only uses it for rotation. Is GlobalMBasis the concatenation of every basis up to the root? |
How is a pin different from a bone tip or a bone target (aka multi effector target)? It was a bit unclear. |
If you mean joint constraints (Kusudamas), they are per bone. If you mean the targets or "pins", those are evaluated over the entire section of the skeleton which they bound. Meaning, if you pin a humanoid skeleton at the pelvis, left wrist, and right wrist; then moving the pin/target of the left wrist around enough would likely result in bones in both the spine and the right arm rotating in order to keep satisfying the requirements of all three targets.
For clarity let's call the original animated skeleton the "source rig", and the target animation skeleton the "destination rig". In theory you don't even need to bother with the A-Pose part. Meaning, if you set the Source Rig to have roughly the same scale as the Destination Rig, then it doesn't matter if the two share the same starting pose, nor does it really even matter if there is a one-to-one mapping in the number of bones between the two rigs. All you would have to do (in theory, but I haven't actually tried it) is decide which joints on the Source Rig best correspond to which joints on the Destination Rig. Then, simply enable targets on the bones of interest in the Destination Rig, and have those targets follow the appropriate joints of interest on the Source Rig. I imagine you would get worse results by trying to match every single bone between the two rigs than by trying to match just the bones of interest.
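A rough sketch of that retargeting loop, in C++. All types and method names here are hypothetical stand-ins (with stub bodies), not the library's or Godot's actual API; only the overall flow follows the description above.

```cpp
#include <utility>
#include <vector>

struct Transform3D { /* rotation + translation; engine-specific */ };

struct SourceRig {
    // Global pose of a source joint, sampled per key-frame (stub).
    Transform3D joint_global_pose(int joint) const { return {}; }
};

struct DestRig {
    void set_target(int bone, const Transform3D &goal) {} // enable/update a pin
    void solve() {}                                       // run the IK solver
};

// joint_map pairs a source joint of interest with its destination bone.
// Per the advice above, only the joints of interest are mapped; the solver
// fills in everything between them.
void retarget_frame(const SourceRig &src, DestRig &dst,
                    const std::vector<std::pair<int, int>> &joint_map) {
    for (const auto &m : joint_map)
        dst.set_target(m.second, src.joint_global_pose(m.first));
    dst.solve(); // satisfies all targets simultaneously
}
```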
Yes. GlobalMBasis is basically a cache of the composed transformation of a base with all of its ancestor bases up to and including the transformation at the armature's root. GlobalMBasis is used to avoid recalculating a basis from scratch when none of its parent bases have changed.
The "LocalAxes" part in simLocalAxes is referring only to fact that they are derived from AbstractBone's "localAxes" object. The "localAxes" of a bone are named that due to an artifact of early implementations, which lacked a dedicated scenegraph. Now that you bring it up, I can see how that would be horribly confusing.
Yeah. And translation. Why do you mention it?
A pin and a target are the same thing. A bone-tip is the thing that reaches for a target, aka an end-effector. All of these terms are (again, now that you mention it) also horribly confusing artifacts of previous versions. In future implementations the following conventions will be adopted: |
updateTipHeadings() updates vectors that track the headings of all effectors which are descendants of the bone currently being evaluated by the solver. updateTargetHeadings() updates vectors that track the headings of each of those effector's targets. |
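Roughly, in C++ terms (illustrative names, not the actual method signatures): the two methods maintain, for the bone currently being evaluated, one heading per descendant effector tip and one per corresponding target. The solver then rotates the bone so the two sets line up as closely as possible (EWBIK uses QCP for that alignment step).

```cpp
#include <vector>

struct Vec3 { double x, y, z; };

static Vec3 sub(const Vec3 &a, const Vec3 &b) {
    return { a.x - b.x, a.y - b.y, a.z - b.z };
}

struct Effector {
    Vec3 tip_global;    // current global position of the effector's bone tip
    Vec3 target_global; // global position of the pin/target it reaches for
};

// Analogue of updateTipHeadings()/updateTargetHeadings() for one bone:
// headings are measured from the bone's origin toward each tip and target.
void update_headings(const Vec3 &bone_origin,
                     const std::vector<Effector> &effectors,
                     std::vector<Vec3> &tip_headings,
                     std::vector<Vec3> &target_headings) {
    tip_headings.clear();
    target_headings.clear();
    for (const Effector &e : effectors) {
        tip_headings.push_back(sub(e.tip_global, bone_origin));
        target_headings.push_back(sub(e.target_global, bone_origin));
    }
}
```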
In Godot Engine, a Basis is a 3x3 matrix representing a rotation and a scale. A Transform is a rotation, scale, and origin. I believe you only use LimitingAxes as a rotation. In the current prototype, I have split the global transform from the local transform. Do you think there are any problems with that? Nothing is functional, so I can't test. |
I've used these concepts: there's a Godot Engine bone, which is replicated in a Chain (similar to a skeleton) and a ChainItem (similar to a bone). I want to make a ChainItem also an IKTarget, depending on whether it is pinned. Thoughts? There is also a
|
Depends on what you mean by "split," and where.
IIRC, yes.
If I'm understanding correctly, you want to make Bone A be the target of Bone B? If so, then the correct way to do that would be to define Bone B's Target Transform as having Bone A's Axes as its parent (and as having no rotation or translation in the local space of that parent). That way, Bone B's Target always ends up having the same position and orientation as Bone A. |
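A minimal sketch of that parenting trick in C++. `Axes` here is a hypothetical stand-in for the library's transform nodes, not its actual class:

```cpp
struct Quat { double w = 1, x = 0, y = 0, z = 0; }; // identity by default
struct Vec3 { double x = 0, y = 0, z = 0; };        // zero by default

// A transform node: a local transform relative to an optional parent.
struct Axes {
    const Axes *parent = nullptr;
    Quat local_rotation;    // deliberately left as identity
    Vec3 local_translation; // deliberately left as zero
};

// Bone B's target axes: parented to Bone A's axes with an identity local
// transform, so resolving the target's global transform always yields
// Bone A's current global pose.
Axes make_target_axes(const Axes &bone_a_axes) {
    Axes target;
    target.parent = &bone_a_axes;
    return target;
}
```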
We also discussed that changing the limit cone control point and radius is difficult via a Vector3 and a float. They need to be controllable by a graphical representation. We're imagining a handle that rotates a pole extending from the center of the sphere outward through the control point, or moving the point on the sphere directly through a screen-space-to-sphere-space transform. Thoughts? |
What I do is show a sphere which the user can click on. Wherever the user clicks / drags, a ray is cast from the camera through the sphere. The closest of those intersections to the camera represents the direction of the LimitCone when transformed to the local reference frame of the constraint. For letting the user specify radius, I put a little draggable handle on the perimeter of the cone, and the user can drag that outward or inward from the cone's on-screen center to make the cone wider or thinner. That approach is easy to program and intuitive to use, but does have the drawback of requiring that the user change their viewing angle if they want to drag their cone to be on the opposite side of the sphere. The most bang-for-buck solution to programming a convenience for that is to change which of the two camera-ray/constraint-sphere intersection points is used, based on the number of times the user has dragged outside of the sphere while manipulating the cone's control point. |
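The picking step described above is standard ray-sphere intersection. A sketch in C++ (names are illustrative):

```cpp
#include <cmath>
#include <optional>

struct Vec3 { double x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 add(Vec3 a, Vec3 b) { return { a.x + b.x, a.y + b.y, a.z + b.z }; }
static Vec3 mul(Vec3 a, double s) { return { a.x * s, a.y * s, a.z * s }; }
static double dot(Vec3 a, Vec3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns the intersection nearer to the ray origin, if any. Switching to
// the farther root (t1) gives the "opposite side of the sphere" behavior the
// comment above suggests toggling.
std::optional<Vec3> pick_on_sphere(Vec3 ray_origin, Vec3 ray_dir /*normalized*/,
                                   Vec3 center, double radius) {
    Vec3 oc = sub(ray_origin, center);
    double b = dot(oc, ray_dir);
    double c = dot(oc, oc) - radius * radius;
    double disc = b * b - c;
    if (disc < 0.0) return std::nullopt;  // ray misses the sphere
    double t0 = -b - std::sqrt(disc);     // nearer intersection
    double t1 = -b + std::sqrt(disc);     // farther intersection
    double t = (t0 >= 0.0) ? t0 : t1;     // camera may be inside the sphere
    if (t < 0.0) return std::nullopt;
    return add(ray_origin, mul(ray_dir, t));
}
```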
I had a lot of trouble exporting the processing demos as stand alone executables. The shaders were not able to be found. |
Can you explain how alignToAxes() works? Why do you do this?
I have an IKAxes class and three variables in a ChainItem, which is the substitute for your bones class (positions relative to the root bone). My problem is converting alignToAxes(). Chain is my substitute for your armature class. This is because Godot Engine already has a Skeleton and Bones. |
Does the issue occur when running the demos from within the processing IDE?
It used to be a convenience method, but I don't think it's used at all anymore.
You've lost me. Are the variables listed (initial_transform, current_pos, current_ori) part of your IKAxes class, or part of your ChainItem (bone) class?
Looks like a modern art installation! |
Here's the https://github.com/fire/Everything-Will-Be-IK-Processing/tree/v3-rc branch with the changes. The key insight is that the data folder goes where the ".pde" is. I copied the armature and the shaders because it was simpler, but less elegant. Can you think of a better way?
"initial_transform, current_pos, current_ori" are part of the ChainItem (Bone) class. |
Sorry, could you be more specific? What is the goal you are trying to accomplish by copying the armature and shader files?
If
A bone can have multiple child bones, so what does it mean to define a bone's position or orientation relative to the position or orientation of just one of its children? |
And what's the purpose of trying to get standalone processing binaries exported from the processing IDE? |
My use case was to demonstrate your proof of concept to interested friends without learning git, processing IDE and other misc thing. I wanted other people to evaluate the proof of concept. Someone who's an artist or a rigger. I'm also typing a response for the transforms question. |
That was the first question that came to mind too. Why does the Godot IK API split the rotation (as Euler angles) from the translation when the Transform data structure handles everything? Godot's internal Transform is a 3x4 matrix consisting of a 3x3 Basis (rotation and scale) and a Vector3 origin for translation. A Basis is convertible to a Quat. My thought was to remove the current_ori and current_pos and use one transform. Do I want it to be global or local? Do I want a transform for the global transform (relative to the root bone) and a local transform (relative to the parent bone)? |
Maybe a user convenience? (Easier for humans to think in Euler angles).
This is a highly atypical representation. Usually transform matrices are square matrices. In 3D, a 4x4 matrix is used with the fourth column or row representing perspective.
For each Bone, you want a single transformation defined relative to its parent bone's transformation. The bone's global transformation is important for doing a bunch of things, but that global transformation can and should only ever be inferred by composing the bone's local transformation with all of its ancestor transformations. The result of that composition should ideally be cached, so that the global transformation is only ever recomputed when one of its ancestor transformations changes, and even more ideally so that the recomputation traversal is carried out only up to the ancestor that was changed. That entire process (of defining transformations relative to parent transformations, caching the global transformation that results from composing a transformation with all of its ancestors, and keeping track of whether a global transformation should be recomputed before returning the result to the user) is what the AbstractAxes class handles. I imagine the Godot engine must have SOME scheme that does something similar with its own transformations internally (or else the Godot engine is highly inefficient).
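A minimal sketch of that caching scheme in C++, with hypothetical names rather than the actual AbstractAxes implementation: each node stores only its local transform, caches the composed global transform, and re-composes lazily when it or any ancestor has been marked dirty.

```cpp
#include <vector>

struct Transform3D { /* rotation + translation; engine-specific */ };

// Placeholder composition: the real thing would multiply the two transforms.
static Transform3D compose(const Transform3D &parent, const Transform3D &local) {
    return Transform3D{};
}

class TransformNode {
public:
    void set_local(const Transform3D &t) { local_ = t; mark_dirty(); }

    // Lazily recomposes the global transform, walking only as far up the
    // chain as is actually stale.
    const Transform3D &global() {
        if (dirty_) {
            global_ = parent_ ? compose(parent_->global(), local_) : local_;
            dirty_ = false;
        }
        return global_;
    }

private:
    // Flag this node and every descendant; stop early where a subtree is
    // already flagged, so repeated edits stay cheap.
    void mark_dirty() {
        if (dirty_) return;
        dirty_ = true;
        for (TransformNode *c : children_) c->mark_dirty();
    }

    TransformNode *parent_ = nullptr;
    std::vector<TransformNode *> children_;
    Transform3D local_, global_;
    bool dirty_ = true;
};
```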
Almost. A Basis also contains a translation component. So, a Basis is convertible to a Quat + a Vector3.
I see. Yes that's probably a good thing for the library to have in general. |
https://github.com/fire/Everything-Will-Be-IK-Processing/tree/v3-rc is my branch with the fixes to
I have also made builds here: |
Most 4x4 matrices can be converted to Godot's 3x4 matrix, as I discovered. The Godot::Basis is only rotation and [scale]. For the IK system I've used the ChainItem as a scratchpad for in-progress IK changes that are a duplicate of the existing Godot::Skeleton (with bones) system. So if I go to the pure local transform schema, I'd need to add caching. [Edited] Chain is Godot::Skeleton, while ChainItem is a bone equivalent. There's another data structure for a camera. |
Great, thanks!
Yes, it's not difficult. It's just a weird choice.
Well, if it's just a scratchpad, you can just have a getGlobalBasis() function that recomputes the global transform each time you need it, and worry about optimizing later. Alternatively, you can just start with a naive port of AbstractAxes. Ultimately, the IK Solver works by creating and maintaining a separate internal representation of the skeleton anyway (so intermediate computations don't interfere with what's being displayed, and so that results are atomic), so converting between the two representations on the mimicry step wouldn't be much harder than mimicking under the same representation. But I would be fairly surprised if Godot doesn't already have some caching scheme implemented. If it doesn't, you might want to consider taking this opportunity to give it one. I can take a look at what Godot is doing if you wanna point me in the right general direction. |
Alright, I looked into Godot's scheme a bit. I believe Godot's Spatial class already handles this sort of transform caching internally. Let me know if you need help. |
Note that Spatial is a scene node, while Transform and Basis are objects. |
Does it being a scene node present some implementation problem? |
Yes, because I shouldn't need to add new Spatial nodes. The bones are in the skeleton node. The skeleton IK CMDD solver is a different node. Currently I'm trying to discover what is global, what is skeleton-global, and what is bone-local. This explains why the bones are [at] the root of the mesh rather than relative to the other bones. Edited: There's also the root of the chain vs. a bone in the chain. NOTE: This isn't the root bone of the skeleton. |
Are you sure? Unless you're using an atypical visualization of Bones, it looks rather like your bones start at the hands and end at the feet.
For reference:
(If I didn't make IKPins default transformation be relative to the skeleton's parent, then you should definitely make it like that for your port!) |
I noticed something architecturally different. Thank you for your explanations.
The Godot IK API has a (current) limit of a single goal that is in the world. In this demo it is a button on the wall. This project appears to add IK pins all the way to the root. I'm calling pins Godot ChainTargets. Here is the current code (the C++ implementation is in the .cpp in the same folder): https://github.com/fire/godot/blob/ik-cmdd-kusudama/scene/animation/skeleton_ik_cmdd.h#L631

```java
public void updatePinList() {
pins.clear();
recursivelyAddToPinnedList(pins, humanArmature.getRootBone());
if(pins.size() > 0) {
activePin = pins.get(pins.size()-1);
}
}
public void recursivelyAddToPinnedList(ArrayList<IKPin> pins, Bone descendedFrom) {
ArrayList<Bone> pinnedChildren = (ArrayList<Bone>) descendedFrom.getMostImmediatelyPinnedDescendants();
for(Bone b : pinnedChildren) {
pins.add((IKPin)b.getIKPin());
b.getIKPin().getAxes().setParent(worldAxes);
}
for(Bone b : pinnedChildren) {
ArrayList<Bone> children = b.getChildren();
for(Bone b2 : children) {
recursivelyAddToPinnedList(pins, b2);
}
}
}
```

I wonder if this is a significant change. It is possible to add all the bones from the root (which root: chain root, or skeleton root?) to the target array. I'm not going to support multiroots in Godot yet (Godot has support for multiroots). |
I'll post the IK demo here so I can reference it. Note that it has only twist constraints and zero direction constraints. [Edited] Good night! |
I'm not sure what you mean. EWBIK only adds IKPins on the bones the user wants to pin. The Processing code you quoted is just for UI purposes. It traverses the skeleton to find which bones have pins attached, so that the user can cycle through them with the up and down arrow keys.
Within the actual library itself targets aren't kept track of in an array, they're kept track of in a tree structure represented by the SegmentedArmature class. I can (and probably should) draw a diagram if you feel unclear about what's going on there. |
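For what it's worth, a speculative C++ sketch of that tree (the actual SegmentedArmature class may differ): the armature is cut at pinned bones, so each segment runs from one pin (or the root) down to the next pinned descendants, and child segments hang off those pins.

```cpp
#include <memory>
#include <vector>

struct Bone; // engine-side bone, defined elsewhere

struct ArmatureSegment {
    Bone *segment_root = nullptr; // the pin (or armature root) this segment starts from
    std::vector<Bone *> bones;    // bones between this pin and the next ones down
    std::vector<std::unique_ptr<ArmatureSegment>> children; // one per pin below
};
```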
Ah, I'm stuck trying to find why the bones aren't attached to their previous bone and why the position is at the floor beside the feet. Thanks for your help though. The holidays are almost over, so things will slow down. |
At which point in your code does a bone's local transform get composed with the transforms of its ancestors in order to determine the bone's global position/orientation? |
I'm having a hard time finding the bug. Is there an order of operations that I can verify step by step? The idea is to start from a good point. I found several places that grabbed the skeleton's global pose when given a bone ID. |
Oh, sorry. I missed this comment somehow. We should probably try to discuss this over live chat or something. You can walk me through the approach you're taking in Godot, and I can walk you through what the current library expects. You can e-mail me with your preferred chat venue at [email protected]. |
I'm trying to scope out what needs to be ported for the C++ game engine Godot Engine.
The goal is to have realistic IK tracking given three sensor points (a head-mounted display and two trackers), and eventually anywhere from 3 to 10 VR tracker positions.
My peers had some concerns about FABRIK's unnaturalness around the bending of the elbows, given only three sensor points.
The current plan is to use analytic solvers for the IK system, but that approach doesn't have sound theory behind it.
#5 mentioned some calibration is needed for scaling the body. We could use a "T" or "A" pose for doing that. Other systems like VRChat or High Fidelity used the height of the head-mounted display and the hips to estimate a possible scale. Scaled avatars, like very large or very small characters, are a problem.
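A minimal sketch of that height-based calibration (an assumed formula, not taken from any of those systems): the ratio of the user's standing HMD height to the avatar's rest-pose eye height gives a uniform scale for the avatar.

```cpp
// Both heights in meters; names are illustrative. Returns a uniform scale
// factor to apply to the avatar before solving.
float estimate_avatar_scale(float user_hmd_height_m, float avatar_eye_height_m) {
    if (avatar_eye_height_m <= 0.0f) return 1.0f; // guard against bad rigs
    return user_hmd_height_m / avatar_eye_height_m;
}
```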