**1 Minute VEX** [Aaron Smith][]
Version 1.0.0, 2022-04-19
Copyright 2020-2022 Aaron Smith. All rights reserved.

Overview
====================================================================================================

No matter what kind of Houdini tutorials you’ve searched or SideFX demonstrations you’ve seen, you’re likely to have come across the vast, incredibly powerful expression language that is VEX. It has quite the learning curve, as most languages do, but knowing how and when to use it efficiently can be amongst the most rewarding challenges in working with Houdini.

These tips are intended for more seasoned Houdini/VEX users - they are an accumulation of the most useful snippets I have come across and written during the last few years. Having to trawl the internet for obscure code while under pressure to deliver can be very daunting, so I hope the following are as convenient for you as they have been for me.

If you have any questions, or just want to talk VEX, feel free to send me an email at aaron@aaronsmith.tv

U Attribute On Complex Curves using surfacedist()
====================================================================================================

The surfacedist() function
---------------------

![Figure [1mvt_01]: A distance-based U attribute represented by blue-to-red colouring](../images/1mvt_01_thumb.jpg)

In this example, we use the surfacedist() function to find a point's distance from a target group. For our purposes, `end_pts` represents the group of points at the tip of our tree's branches. By iterating surfacedist() over each point, we can build a range of distances to this target group, beginning at the furthest edge and always ending at 0 (representing no remaining distance to the target). We can then use a handy trick: promoting the greatest value found to a detail attribute. Finally, we use a second wrangle to fit the min/max distances into a 0-1 range, giving us the `u` attribute on a series of complex interconnected curves.

### Houdini Implementation
Let’s initialize our surfdist attribute, and promote its maximum to a detail attribute:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX surfacedist - Point Wrangle example
int closest_pt;

// Find the distance from point along edges to the target point group.
f@surfdist = surfacedist(0, "end_pts", "P", @ptnum, closest_pt, "edge");

// Store the max point edge distance as a detail attribute.
setdetailattrib(0, "surfdist_max", f@surfdist, "max");
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt1-ptwA]: [VEXpression - Point Wrangle 1] The surfacedist() function]
Now that we have our surface distance attribute and its max value, we can constrain it to a 0-1 range.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// Minimum distance is always 0; read the promoted max from detail.
float sd_min = 0;
float sd_max = detail(0, "surfdist_max");

// Fit the edge distance into a 0-1 range.
@u = fit(f@surfdist, sd_min, sd_max, 0, 1);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt1-ptwB]: [VEXpression - Point Wrangle 2] Fitting the surfdist attribute]
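The setdetailattrib() trick above generalises to any running min/max you need, without an intermediate Attribute Promote SOP - a minimal sketch, with illustrative attribute names:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// Accumulate the highest and lowest point heights as detail attributes.
// "height_max"/"height_min" are example names, not part of the setup above.
setdetailattrib(0, "height_max", @P.y, "max");
setdetailattrib(0, "height_min", @P.y, "min");
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The "add" and "average" modes accumulate in the same way, which makes this a handy one-liner for per-geometry statistics.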
Sampling Attributes with uvdist() and primuv()
====================================================================================================

The uvdist() function
---------------------

![Figure [1mvt_02]: Sampling an arbitrary U position represented by an orange sphere](../images/1mvt_02_thumb.jpg)

In this example, we continue with our tree - assigning `u` to the first component of the `uv` vector. By isolating our primitive number (converting it to a string group name), we can use the uvdist() function to detect how far our sample position `spos` is from the current prim. We then compare our distance to a minimum tolerance; if the sampled distance to the prim is smaller, primuv() (in conjunction with our exported `dprim` and `duv` variables) is used to extract our desired attribute, whichever primitive the sample lands on.

### Houdini Implementation
Using a primitive wrangle to iterate over all of our object's curves, let's evaluate our desired sample position.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX uvdist & primuv - Primitive Wrangle example
// Assign u sample variable from spare parameter.
vector spos = set(chf("sample_u"), 0, 0);

// Assign current primitive as variable group name.
string prnum = itoa(@primnum);

// Export UV distance, parametric coordinates at UV position.
int dprim;
vector duv;
float dist = uvdist(0, prnum, "uv", spos, dprim, duv);

// Assign variable minimum tolerance for uv distance sampling.
float tol = 1e-8;

if(dist < tol)
{
    // Sample world position using parametric coordinates.
    vector pos = primuv(0, "P", dprim, duv);

    // Add a new point, with point group, at sampled world position.
    int newpt = addpoint(geoself(), pos);
    setpointgroup(geoself(), "uv_sampled_pts", newpt, 1, "set");
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt2-prwA]: [VEXpression - Primitive Wrangle 1] The uvdist() & primuv() functions]
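The same pattern extends naturally to scattering several evenly spaced samples along `u` - a sketch under the same setup, assuming an added `u_samples` integer spare parameter:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// Scatter evenly spaced u samples along each curve primitive.
string prnum = itoa(@primnum);
int samples = chi("u_samples");

for(int i = 0; i < samples; i++)
{
    // Evaluate u at the centre of each sample interval.
    vector spos = set((i + 0.5) / samples, 0, 0);

    int dprim;
    vector duv;
    float dist = uvdist(0, prnum, "uv", spos, dprim, duv);

    // Add a point at each successfully sampled position.
    if(dist < 1e-8)
    {
        addpoint(geoself(), primuv(0, "P", dprim, duv));
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~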
Ray-Cast Ambient Occlusion using intersect()
====================================================================================================

The intersect() & sample_hemisphere() functions
---------------------

![Figure [1mvt_03]: Point-based ambient occlusion applied to a hi-res mesh](../images/1mvt_03_thumb.jpg)

In this example, we begin by creating a variable for position - offsetting it from the surface very slightly to avoid unintended ray intersections. Once we have assigned how many samples we want, and the hemispherical radius of our ambient occlusion (which determines how close geometry has to be to occlude), we use a for loop to iterate over each sample. We then generate a random hemispherical direction using our point normal, and use that directional vector as the ray. If our ray hits geometry, its distance from the initial position is fit within a 1-0 range - 1 being the closest a ray could possibly be, and 0 the furthest. This value is accumulated in the `ao` variable, which is then divided by the total number of samples taken. We finally return the complement, in order to make close intersections dark, and distant or non-intersecting rays bright.

### Houdini Implementation
Using a point wrangle, we create a loop that will sample and average occluding ray intersections.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX sample_hemisphere & intersect - Point Wrangle example
// Assign initial variables, including P with small surface offset.
vector pos = @P + (@N * 1e-6);
int samples = 256;
float radius = 0.1;
float ao = 0;

for(int i = 0; i < samples; i++)
{
    // For each sample, create a random hemispherical direction using N.
    vector2 seed = rand(@ptnum + i);
    vector dir = sample_hemisphere(@N, seed);

    // Export position of directional intersection, limited to radius.
    vector ipos;
    vector iuvw;
    int isect = intersect(0, pos, dir * radius, ipos, iuvw);

    // If an intersection is found, fit ray length into a 1-0 range and
    // add the result to the accumulating variable 'ao'.
    if(isect != -1)
    {
        ao += fit(distance(ipos, pos), 0, radius, 1, 0);
    }
}

// When all samples are iterated over, divide the total ao sum by the
// total number of samples, then return its complement.
f@ao = 1 - (ao / samples);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt3-ptwA]: [VEXpression - Point Wrangle 1] The intersect() function]
NDC Space (Normalized Device Coordinates) & VEX
====================================================================================================

An Introduction to Normalized Device Coordinates
---------------------

Normalized Device Coordinate (NDC) space is a coordinate system used in rendering, mapping our display to a cube (known as the ‘view volume’) wherein x, y and z are conventionally within the range -1 to 1. Transforming vertices to NDC space is the essential intermediary between world space and screen space (our 3D mapped to 2D pixels) - and can be a tricky subject to understand without any pre-existing knowledge of linear mapping. Luckily, SideFX has done the heavy lifting for us! Using toNDC() we can provide a camera and point position in order to translate `P` into its respective NDC. Note that Houdini uses its own convention here: toNDC() returns x and y in a 0-1 range across the camera's view, with z as the negative distance from the camera.

Part 1: Scaling Objects by Camera NDC
---------------------

![Figure [1mvt_04]: Scaling a teapot in NDC space](../images/1mvt_04_thumb.jpg)

In this example, we begin by providing a camera path as a string. We use the chs() function, as its spare parameter comes with plenty of node-finding utility. Once we have converted `P` to NDC, we can multiply the z component of our new coordinates. This keeps our points in the exact same display position, while drawing that relative position closer to or further from our camera. Now that we have our adjusted position, we convert our NDC back to world position and assign it to `P`.

### Houdini Implementation
Using a point wrangle, find our point's position in a camera's relative space and modify the resulting position.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX toNDC & fromNDC - Point Wrangle Example
// Assign camera variable using OBJ path as string.
string cam = chs("camera_obj");

// Convert point position to camera's normalized device coordinates.
vector p_ndc = toNDC(cam, @P);

// Use Z component of NDC to move object points 'backward'.
p_ndc[2] *= chf("scale_by_cam_dist");

// Convert point NDC position back to world position and assign to P.
@P = fromNDC(cam, p_ndc);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt4-ptwA]: [VEXpression - Point Wrangle 1] Scaling with NDC Space]
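If the convention is ever unclear, printing a few converted positions makes it concrete - a quick sketch (the camera path `/obj/cam1` is an assumption):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// Inspect Houdini's NDC convention for the first few points. In-frame
// points report x and y between 0 and 1; z is negative in front of
// the camera.
vector p_ndc = toNDC("/obj/cam1", @P);
if(@ptnum < 5)
{
    printf("pt %d ndc: %g\n", @ptnum, p_ndc);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~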
Part 2: Culling Points by Camera NDC
---------------------

![Figure [1mvt_05]: Colouring a group of points by their NDC bounds](../images/1mvt_05_thumb.jpg)

For this example, we begin once again by creating a variable camera path as a string parameter. We then set up our `cull_scale`, which is the proportion of the display that will be culled. In order to have our parameter retain 0% of the display at 0, and 100% at 1, we multiply the scale by 0.5 so that it expands symmetrically from the centre of the frame (0.5) - with the min being subtracted from, and the max being added to. Once we have converted our `P` attribute to NDC space, we can compare each float component of our vector to its relevant min/max positions. If it is outside the min/max, the corresponding array value will be 1 (true). Z values above 0 are culled because NDC space follows the camera's right-handed coordinate system, looking down the negative z axis - positive z means the point sits behind the camera. Finally, using foreach(), we loop over each comparison. If any axis has returned true for sitting outside of our bounds, we remove the point and exit the loop.

### Houdini Implementation
Using a point wrangle, find our point's position in a camera's relative space and cull it if it lies outside of the camera frustum.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX toNDC & removepoint - Point Wrangle Example
// Assign camera variable using OBJ path as string.
string cam = chs("camera_obj");

// Assign cull scale, halved to fit parameter culling in 0 to 1+ range.
float cull_scale = chf("cull_scale") * 0.5;
float cull_min = 0.5 - cull_scale;
float cull_max = 0.5 + cull_scale;

// Convert point position to camera NDC.
vector p_ndc = toNDC(cam, @P);

// Create an array for comparing each axis with its min/max NDC bounds.
int is_culled[] = array(p_ndc.x < cull_min || p_ndc.x > cull_max,
                        p_ndc.y < cull_min || p_ndc.y > cull_max,
                        p_ndc.z > 0);

// For each array item, check if its comparison has returned true.
foreach(int cull; is_culled)
{
    // If true, remove the current point and exit the loop.
    if(cull)
    {
        removepoint(geoself(), @ptnum);
        break;
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt5-ptwA]: [VEXpression - Point Wrangle 1] Culling with NDC Space]
Camera Occlusion Culling using intersect()
====================================================================================================

An Introduction to Occlusion Culling in Houdini
---------------------

Occlusion culling is the process of removing geometry unseen by the camera. It is a technique often used in video game rendering, with the purpose of optimising performance and keeping the time taken to generate a single frame low. In Houdini, occlusion culling can be a great way to keep particle sim file sizes low, and to separate what is in view from what is not.

The intersect() & optransform() functions
---------------------

![Figure [1mvt_06]: Colouring (and clipping) lines in red that intersect on their path to the camera](../images/1mvt_06_thumb.jpg)

In this example, we begin by creating a string path variable for our occlusion object. While the functions we have used so far take raw string paths as their input, to use intersect() we must prefix our path with `op:`. We then create a transformation matrix from the camera path using optransform(); this allows us to multiply an initialised position by said matrix and return our camera’s position. Next, we need to find the direction and length of our point-to-camera ray. For direction, we subtract point position from camera position and normalize the result. For length, we simply use the distance() function. Once we have all of our variables, we pass them to their respective arguments in intersect(). If the ray (direction multiplied by distance) does not hit our geometry, intersect() returns -1; therefore, we remove our point if it returns anything else.

### Houdini Implementation
Using a point wrangle, we must find the occluding OBJ operator, and find out if our points intersect it on the path between them and our camera source.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX optransform & intersect - Point Wrangle Example
// Assign occlusion object as SOP path with 'op:' syntax.
string objop = "op:" + chs("occlusion_obj");

// Find the transform matrix associated with camera path, and point pos.
matrix camxform = optransform(chs("camera_obj"));
vector pos = @P;

// Create the vector camera position from its transform matrix.
vector campos = set(0, 0, 0) * camxform;

// Create and normalize the point-to-camera direction.
vector camdir = normalize(campos - pos);

// Find point-to-camera distance to use as intersect max distance.
float camdist = distance(pos, campos);

// Check if point-to-camera ray intersects the occlusion geometry within
// max distance. If a prim is returned, remove the current point.
vector ipos;
vector iuvw;
int iprim = intersect(objop, pos, camdir * camdist, ipos, iuvw);

if(iprim != -1)
{
    removepoint(geoself(), @ptnum);
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt6-ptwA]: [VEXpression - Point Wrangle 1] Point to camera occlusion]
Colorspace transformation using OCIO & ocio_transform()
====================================================================================================

The colormap() and ocio_transform() functions
---------------------

![Figure [1mvt_07]: Applying an image of the Peak District to a hi-res mesh as point colours](../images/1mvt_07_thumb.jpg)

In this example, we begin by finding the first linear vertex sharing the point we are currently iterating over. We can then use our vertex number to get the associated uv attribute value. The reason we extract uv this way, and not by simply typing `@uv`, is that we would otherwise be binding and promoting uv to a point attribute - averaging values across uv seams, which could create range issues later on. We then create a string channel to use as a disk path to our image, and with this we have all of our initial variables. Using the colormap() function, we can get a filtered position on our image and use a very convenient syntax to specify what behaviours we might want the image to adhere to. In this example, the image is set to repeat outside of the uv boundaries. However, our example image is a jpg, natively in the sRGB colourspace. If we are using another colourspace such as ACES, we will want to correct our now-linearised values appropriately. Using the ocio_transform() function, we convert our colour vector from linear sRGB to ACEScg.

### A Note on OCIO & ACES

OCIO and ACES are tricky subjects to explore if you are not familiar with why they are used or what they do in principle. If you are interested in understanding them more, feel free to read these great write-ups on both:

[_OCIO support in Houdini - A brief overview on OpenColorIO in Houdini_](https://www.sidefx.com/docs/houdini/io/ocio.html)

[_An Idiot’s Guide to ACES - An artist friendly guide to ACES in Houdini_](https://www.toadstorm.com/blog/?p=694)

### Houdini Implementation
Using a point wrangle, we can sample an image path/position and transform it per-point.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX colormap & ocio_transform - Point Wrangle Example
// Find the first linear vertex to share this point.
int vtx = pointvertex(0, @ptnum);

// Get uv attribute value from selected vertex.
vector uv = vertex(0, "uv", vtx);

// Create image path parameter as string variable.
string imgpath = chs("image_path");

// Get colour from image, using uv coordinate as sample point.
vector imgcol = colormap(imgpath, uv, "wrap", "repeat");

// Transform image colour from linear sRGB to ACEScg.
@Cd = ocio_transform("lin_srgb", "acescg", imgcol);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt7-ptwA]: [VEXpression - Point Wrangle 1] Colorspace conversion]
Sampling textures with xyzdist() and UDIMs
====================================================================================================

The colormap() and expand_udim() functions
---------------------

![Figure [1mvt_08]: Sampling a UDIM-applicable texture set to points by a nearby cube](../images/1mvt_08_thumb.jpg)

In this example, we explore a method for sampling the nearest texture to a point, in higher detail than exists on the target surface. For this to work, we first need to assign the texture path to the target as a primitive string attribute. Then, on the points we intend to sample with, we create a point wrangle and the initial export variables to be used in conjunction with xyzdist(). This function primarily returns the distance to the nearest surface point on a target, but also allows us to export the sampled primitive number and parametric uv position into respective variables. We then use our nearest prim number to read our image path string. In order for UDIM filename expansion to work, we first need to check that it is actually possible - using has_udim() in an if statement. If this returns true, we use expand_udim() to convert the nearest uv position into the required UDIM tile number, overwriting the special character sequence to give an absolute path. Now that we have our evaluated image path, we can use colormap() to sample our texture at the nearest uv position, then finally correct its colourspace with ocio_transform().

### A Note on UDIM Filename Expansion

In UV space, UDIMs are a great way to represent texture maps in positions that do not adhere to the standard 0-1 range a single map might occupy. However, converting UV coordinates to a UDIM position can be a little bit tricky, so here we explore some functions that make image path processing a little easier.

### Houdini Implementation
Before we sample an object for its texture path, we must provide a texture path attribute to the target object.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// Initial wrangle to hold texture path information.
// Use a string attribute to specify the image used.
s@image_path = chs("image_path");
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt8-prwA]: [VEXpression - Primitive Wrangle 1] Assigning texture path attribute]
Using a point wrangle on our sampling points, we can look up the target's nearest texture path and colour per point.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX xyzdist, primuv & colormap - Point Wrangle Example
// Find the distance to the target surface, exporting the nearest prim
// number and parametric uv position.
int xprim;
vector xuvw;
float xdist = xyzdist(1, @P, xprim, xuvw);

// Find uv position with exported xyzdist values.
vector uv = primuv(1, "uv", xprim, xuvw);

// With nearest prim, find its previously stored image path.
string imgpath = prim(1, "image_path", xprim);

// If image path contains the UDIM/UVTILE special character sequence,
// use uv position to find its tile associated map path.
if(has_udim(imgpath))
{
    imgpath = expand_udim(uv.x, uv.y, imgpath, 0);
}

// Sample the nearest filtered colour using checked path and transform.
vector imgcol = colormap(imgpath, uv, "wrap", "repeat");
@Cd = ocio_transform("lin_srgb", "acescg", imgcol);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt8-ptwB]: [VEXpression - Point Wrangle 1] Sampling texture path attribute]
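For reference, the tile number that expand_udim() substitutes follows the standard UDIM convention - 1001, plus the integer part of u, plus ten times the integer part of v. A sketch of the arithmetic (the function name is illustrative):

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// UDIM tile numbering: 1001 + floor(u) + 10 * floor(v).
// e.g. uv (0.5, 0.5) sits in tile 1001, (1.5, 0.5) in 1002,
// and (0.5, 1.5) in 1011.
int udim_tile(vector uv)
{
    return 1001 + int(floor(uv.x)) + 10 * int(floor(uv.y));
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~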
Triplanar Mapping & Projection using colormap()
====================================================================================================

Triplanar projection & colormap() functions
---------------------

![Figure [1mvt_09]: Applying triplanar-esque shader sampling to a hi-res mesh](../images/1mvt_09_thumb.jpg)

In this example, we recreate the classic triplanar projection - often used to tile textures without the need for UVs - at point wrangle level. We begin once again by creating our variable `imgpath` from a string parameter, this time accompanying it with the vector point position. In order to control the projection like you would in-shader, we add a vector to the initial position (as an offset) and then multiply (for frequency). We then create a blank variable for our resulting colour, and an absolute (positive) equivalent of the normal. `N` is made positive so that each projection covers both facing directions of its axis, and negative-facing surfaces are not left unmapped. Using a for loop, we can now iterate over these axes and set our vector `uv` to the current planar coordinates. Modulo is a great way to cycle through a set of numbers, letting the uv components read as YZ, ZX and XY across the three iterations. Knowing our position in the loop, we build the third, orthogonal axis (perpendicular to our two planar axes) and take the arccosine of its dot product with the point normal `N` - finding our angle from the current axis in radians, then converting it to degrees. We then fit this angle within a specified range for use as the projection mask, blending the filtered pixel found by colormap() into our `imgcol` variable. Once the axes have been accumulated, the result can be assigned to the `@Cd` attribute to be visualized.

### Houdini Implementation
Using a point wrangle, we blend the texture projection from each axis into a final colour.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX colormap triplanar - Point Wrangle Example
// Assign each initial variable, with controls for uv position.
string imgpath = chs("texturepath");
vector pos = (@P + chv("proj_offset")) * chf("proj_freq");

// Iterate over axes, with a blank colour vector and absolute normal.
vector imgcol = 0;
vector N = abs(@N);

for(int i = 0; i < 3; i++)
{
    // Set the uv position as P on the currently selected projection plane.
    vector uv = set(pos[(1 + i) % 3], pos[(2 + i) % 3], 0);

    // Set the up vector as the orthogonal component.
    vector up = set(0, 0, 0);
    setcomp(up, 1, (3 + i) % 3);

    // Get the relative angle of N to our plane, and convert to degrees.
    float angle = acos(dot(up, N)) / (PI / 180.0);

    // Within a given angular threshold, create the planar mask.
    float mask = fit(angle, 0, chf("proj_angle"), 1, 0);
    mask = chramp("mask_ramp", mask, 0);

    // Apply the texture map to the planar projection uv coordinates.
    vector projmap = colormap(imgpath, uv, "wrap", "repeat");
    imgcol = lerp(imgcol, projmap, mask);
}

@Cd = ocio_transform("lin_srgb", "acescg", imgcol);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt9-ptwA]: [VEXpression - Point Wrangle 1] Triplanar projection and texture sampling]
Weighted Integer Sampling using sample_discrete()
====================================================================================================

The sample_discrete() function
---------------------

![Figure [1mvt_10]: Spheres sampling one of three colours by their relative position in y mixed with a random value](../images/1mvt_10_thumb.jpg)

In this example, we use the sample_discrete() function to make a weighted selection from an array. Because sample_discrete() only works with values in a 0-1 range, we begin by finding the position of our point relative to the input bounding box, using relbbox(). These values are uniformly sampled, so we can create a filtered/dithered effect by interpolating our relative position with a random 0-1 float; using lerp() and rand(), we can introduce and adjust this effect. Once we have built an array with the desired weight values (in this case a series of parameters), we use sample_discrete() to return an integer between 0 (the first item in the list) and 2 (the last item in the list) at the sampled u value. We can then use this value to drive a variety of useful functions, such as choosing a colour from another, equally sized array, or assigning the name of a new point group by converting the value to a string (with the itoa() function).

### Houdini Implementation
Using a point wrangle, we mix relative position with noise and make a weighted choice per point.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX sample_discrete - Point Wrangle Example
// Find the relative position of P to the first input bounding box.
vector relpos = relbbox(0, @P);

// Assign variable for how random u will be.
float rand_weight = chf("rand_weight");

// Interpolate between pt seeded rand and relpos in x.
float upos = lerp(relpos.x, rand(@ptnum), rand_weight);

// Create an array of float parms for weighted sampling.
float weights[] = array(chf("weight_1"), chf("weight_2"), chf("weight_3"));

// Sample the int array item at u across weighted values.
int choice = sample_discrete(weights, upos);

// Create an array of colours to use choice with.
vector colours[] = array(chv("colour_1"), chv("colour_2"), chv("colour_3"));

// Set point group name as selection choice.
setpointgroup(geoself(), "choice_" + itoa(choice), @ptnum, 1, "set");
@Cd = colours[choice];
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt10-ptwA]: [VEXpression - Point Wrangle 1] The sample_discrete() function]
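Note that the weights passed to sample_discrete() do not need to sum to 1 - they are treated as relative proportions. A minimal sketch of how the u value maps to an index:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// With weights {1, 1, 2}, indices 0, 1 and 2 occupy roughly 25%,
// 25% and 50% of the 0-1 input range respectively.
float weights[] = array(1.0, 1.0, 2.0);

int low = sample_discrete(weights, 0.1);   // lands in the first quarter
int high = sample_discrete(weights, 0.9);  // lands in the final half
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~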
Iterating Over Integer Attributes using uniquevals()
====================================================================================================

Finding prim patch area
---------------------

In this example, we use the uniquevals(), findattribval() and findattribvalcount() functions to iterate over prim patches - defined by a prim attribute `id` (in this case, uv islands). We begin by using a string channel parameter to assign our first variable, written as `island` - the id associated with the uvlayout SOP. Knowing all of our subsequent operations will be run over primitives, we create the string attrib type variable as `prim`.

The uniquevals() function
---------------------

![Figure [1mvt_11]: Colouring uv patches by their relative scale](../images/1mvt_11_thumb.jpg)

Using uniquevals(), we then fill an array with all unique instances of our primitive attribute ids. We can iterate over these using foreach(), as each id value represents an individual patch. We then create a blank array for future prims to be pushed to, alongside a blank `patcharea` float to be accumulated. findattribvalcount() then allows us to iterate through a second dimension - the number of prims currently holding our id number. However, to iterate through these prims we need a for loop (within our current foreach loop) and the findattribval() function, which allows us to specify which prim we may be looking for by providing an iterator (in this case, `i`). Then, using primintrinsic() to look up the total prim area, we add this to `patcharea` and push the current prim number to our patch prims array. However, in order to ensure our area is safely calculated, we use isnan() to check that the float is not NaN (‘not a number’), as `measuredarea` can sometimes throw up illegal values. Now that we have the total area for our patch calculated, we can loop over each previously found primitive and set the appropriate attribute value.

### Houdini Implementation
Using a detail wrangle, we iterate over every patch id and accumulate its area.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX uniquevals & findattribval - Detail Wrangle Example
// Assign attrib patch name variable, and attrib type as string prim.
string atrname = chs("attrib_name");
string atrtype = "prim";

// Find all unique patch ids on the geometry and iterate over them.
int ids[] = uniquevals(0, atrtype, atrname);
foreach(int id; ids)
{
    // Create a blank array for prims found, and initialized patch size.
    int patchprims[];
    float patcharea = 0;

    // Find number of prims with current patch id and iterate over them.
    int idnum = findattribvalcount(0, atrtype, atrname, id);
    for(int i = 0; i < idnum; i++)
    {
        // Find prim associated with current patch id value iteration.
        int prim = findattribval(0, atrtype, atrname, id, i);

        // Find the intrinsic area attribute of current patch prim.
        float primarea = primintrinsic(0, "measuredarea", prim);

        // If eligible, accumulate patch area by adding in prim area.
        if(!isnan(primarea))
        {
            patcharea += primarea;
        }

        // Append current prim to list of iterated patch prims.
        append(patchprims, prim);
    }

    // For each iterated prim, set patch area prim attribute.
    foreach(int pprim; patchprims)
    {
        setprimattrib(0, "patcharea", pprim, patcharea, "set");
    }
}
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt11-dtwA]: [VEXpression - Detail Wrangle 1] The uniquevals() function]
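To colour patches by relative scale as in the figure, the detail-promotion trick from earlier applies again: one primitive wrangle promotes the largest patch area, and a second fits each patch into a greyscale range - a sketch, assuming the `patcharea` attribute from the listing above:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// Primitive wrangle A - promote the largest patch area found.
setdetailattrib(0, "patcharea_max", f@patcharea, "max");
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// Primitive wrangle B - fit each patch area into a 0-1 greyscale colour.
float amax = detail(0, "patcharea_max");
@Cd = fit(f@patcharea, 0, amax, 0, 1);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~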
Attribute Gradient Estimation From Neighbours
====================================================================================================

Weighted averaging using neighbours() & foreach() loops
---------------------

![Figure [1mvt_12]: Angling and colouring arrows by their attribute gradient](../images/1mvt_12_thumb.jpg)

In this example, we explore point attrib gradients in Houdini, and a quick and cheap method by which we can estimate the gradient direction of an attribute by looking at our point neighbours. First we begin by finding three variables - our attrib (gradient) name, the attrib value (from name), and the point position. We then generate an array containing the point numbers of all known point neighbours, using the neighbours() function. We then initialize two arrays - `weights` for incoming weights, and `dirs` for incoming directions found. Iterating over the neighbours using foreach(), we find the desired attribute value of each neighbour and determine how different it is from our own using the fit() function. This gives neighbours with values higher than the current point a weight of 0, and a weight in the 0-1 range for values between the current gradient and 1. Now that we have our weight, we can append it to `weights`, and after finding the direction of this neighbour to our point, append the weighted direction to our `dirs` array. With all of our neighbouring directions and weights found, we divide the sum of all weighted directions by the sum of all weights to find the weighted average gradient direction.

### Houdini Implementation
Using a point wrangle, we gather each neighbour's weighted direction and average the results.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// VEX neighbours & weighted averaging - Point Wrangle Example
// Assign string variable for gradient attrib name.
string atr_name = chs("gradient_attrib");

// Assign variables for point gradient and position.
float grad = point(0, atr_name, @ptnum);
vector pos = @P;

// Find all neighbouring point numbers as an array.
int neighbours[] = neighbours(0, @ptnum);

// Create empty arrays for neighbouring weight and direction values.
float weights[];
vector dirs[];

// Iterate over neighbours, finding their point gradient and positions.
foreach(int npt; neighbours)
{
    float n_grad = point(0, atr_name, npt);
    vector n_pos = point(0, "P", npt);

    // Weight neighbour gradient by difference to current point and 1.
    float weight = fit(grad, n_grad, 1, 0, 1);
    append(weights, weight);

    // Find vector direction/magnitude and append to dir array.
    vector n_dir = n_pos - pos;
    append(dirs, n_dir * weight);
}

// Find the weighted gradient sum of all neighbouring directions,
// guarding against division by zero when every weight is 0.
float wsum = sum(weights);
v@direction = wsum > 0 ? sum(dirs) / wsum : set(0, 0, 0);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
[Listing [imvt12-ptwA]: [VEXpression - Point Wrangle 1] Weighted averaging]
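The estimated direction can then orient copied geometry directly, as in the figure - a sketch in a follow-up point wrangle, with illustrative colouring:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ C
// Aim N along the estimated gradient so copied arrows follow it,
// and colour points by the gradient's magnitude.
v@N = normalize(v@direction);
f@grad_mag = length(v@direction);
@Cd = set(f@grad_mag, 0.0, 1.0 - f@grad_mag);
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~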
[Aaron Smith]: https://github.com/aaronsmithtv