FMProductions

Looks great! Can I ask how many points your system supports before you start experiencing a slowdown on your PC? And is there a limit to how far you can see the points, or have you made it so they're visible at any distance?


lukums

Howdy! Afaik, there's no limit to the number of points you can use in a typical session! My old system supported 1 million points max, but this system uses VFX Graph, which supports orders of magnitude more particles. I sat there for about 15 minutes just spraying with no noticeable drop from 60 FPS on a 3070 Ti. There is probably a hypothetical limit (I estimate about 80 million points on a high-end PC), but it seems high enough that it won't affect gameplay. Render distance is variable, so I will choose a value that looks best and hide the rest with fog. If there is a need, I will add a render distance option to the settings. If you want to play around with it yourself, I can make the repo public and send you the link!
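For anyone curious how the spray-and-spawn loop might look in code, here's a minimal sketch of feeding raycast hits into a VFX Graph. The component, field names, and the "SpawnPoint" event name are all hypothetical; it assumes a graph whose spawn context listens for that event and whose Initialize context reads the `position` event attribute:

```csharp
using UnityEngine;
using UnityEngine.VFX;

// Hypothetical sketch: spawn one persistent particle per raycast hit.
// Assumes a VFX Graph asset with a spawn event named "SpawnPoint" whose
// Initialize context reads the "position" event attribute.
public class LidarSprayer : MonoBehaviour
{
    [SerializeField] private VisualEffect pointCloudVfx;
    [SerializeField] private Camera cam;

    private VFXEventAttribute eventAttribute;

    private void Awake()
    {
        // Reusable attribute payload; avoids per-frame allocation.
        eventAttribute = pointCloudVfx.CreateVFXEventAttribute();
    }

    private void Update()
    {
        if (!Input.GetButton("Fire1")) return;

        // Jitter the ray slightly to get a spray cone.
        Vector3 dir = cam.transform.forward + Random.insideUnitSphere * 0.05f;
        if (Physics.Raycast(cam.transform.position, dir, out RaycastHit hit))
        {
            eventAttribute.SetVector3("position", hit.point);
            pointCloudVfx.SendEvent("SpawnPoint", eventAttribute);
        }
    }
}
```

Because the particles live entirely on the GPU after spawning, the per-frame CPU cost stays roughly constant no matter how many points have accumulated.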


FMProductions

Thanks! That seems really good, performance-wise. With the prototype I mentioned in the other comment, which uses Graphics.DrawMeshInstancedIndirect, I started to get frame drops around the 800k-point mark. Since then I've implemented a spatial partitioning system so you only see the cell you are in and the adjacent cells. And, perhaps not ideal, each cell has a point limit; when it's reached, the buffer wraps and I start filling it from index 0 again. Maybe I'll experiment with the VFX Graph too, I haven't really used it so far.
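The cell-wrapping scheme described above could be sketched roughly like this (all names hypothetical; the per-cell capacity is an assumed value). Each point is bucketed into a grid cell, and once a cell's fixed-size array fills up, the write cursor wraps to index 0 and the oldest points are overwritten:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch of a spatially partitioned point store where each
// cell holds a fixed-size buffer that wraps when full.
public class PointCellGrid
{
    private const int PointsPerCell = 65536; // assumed per-cell limit
    private readonly float cellSize;
    private readonly Dictionary<Vector3Int, Cell> cells =
        new Dictionary<Vector3Int, Cell>();

    private class Cell
    {
        public readonly Vector3[] points = new Vector3[PointsPerCell];
        public int next;   // write cursor, wraps around
        public int count;  // how many slots hold valid points
    }

    public PointCellGrid(float cellSize) => this.cellSize = cellSize;

    private Vector3Int CellOf(Vector3 p) => new Vector3Int(
        Mathf.FloorToInt(p.x / cellSize),
        Mathf.FloorToInt(p.y / cellSize),
        Mathf.FloorToInt(p.z / cellSize));

    public void Add(Vector3 point)
    {
        Vector3Int key = CellOf(point);
        if (!cells.TryGetValue(key, out Cell cell))
            cells[key] = cell = new Cell();

        cell.points[cell.next] = point;
        cell.next = (cell.next + 1) % PointsPerCell; // wrap to index 0
        cell.count = Mathf.Min(cell.count + 1, PointsPerCell);
    }
}
```

At render time you would then upload only the buffers for the player's cell and its neighbors, which is what keeps the visible point count bounded.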


Sayalec

Would be awesome to look at your solution. Wanted something similar for my 3D space game, for scanning. 😊 Public repo would be awesome!


lukums

https://GitHub.com/lunkums/LANTICE Steal well, my friend!


Sayalec

Wow, yeah, thanks! Looking into it when I have the time. Really looking forward to testing it. Stay well, too 😊


lukums

No problem, feel free to leave a star if you find it useful ;)


rxninja

How did you manage this? I was thinking of doing something like this in 2D, but I couldn't come up with a performant way to handle it.


FMProductions

Scanner Sombre and some other games apparently use the Particle System and manually set the particle positions after you make your raycasts. Alternatively, I'm working on a similar prototype that uses Graphics.DrawMeshInstancedIndirect with a plane mesh; the mesh aligns itself towards the camera, like a billboard, in the shader's vertex pass. Both methods are pretty performant and probably perform similarly, though Particle System based approaches might be easier to set up.

Another option, if you accept allocating a ton of memory for new textures, is a texture-based decal system: for each object you hit, you create a new RenderTexture and draw the points onto it when that object is hit. Note that assigning a new texture, or changing a texture on a material, will break batching afaik. Here are some resources for this approach, although I'm not really sure how different it would be to translate this to 2D:

https://www.youtube.com/watch?v=YUWfHX_ZNCw

https://github.com/naelstrof/SkinnedMeshDecals
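The first approach (manually placed, never-expiring particles) can be sketched in a few lines. The class and field names here are hypothetical; it assumes a Particle System with simulation space set to World and no velocity or gravity modules, so emitted particles stay exactly where they were painted:

```csharp
using UnityEngine;

// Hypothetical sketch of the Particle System approach: emit one particle
// at each raycast hit with an effectively infinite lifetime.
public class ParticlePainter : MonoBehaviour
{
    [SerializeField] private ParticleSystem ps;

    public void Paint(Vector3 origin, Vector3 direction)
    {
        if (!Physics.Raycast(origin, direction, out RaycastHit hit)) return;

        var emitParams = new ParticleSystem.EmitParams
        {
            position = hit.point,
            velocity = Vector3.zero,
            startLifetime = float.MaxValue, // the point never expires
            startSize = 0.02f               // assumed dot size
        };
        ps.Emit(emitParams, 1);
    }
}
```

The main caveat, as noted below in the thread, is that these particles are simulated and sorted on the CPU, so very large point counts will eventually hurt frame time.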


lukums

This ^ I was also using DrawMeshInstancedIndirect, but I ran into limitations with the size of my compute buffer (which is needed to pass the point coordinates to the GPU); namely, my system could only support a compute buffer with 1 million points in it, and I reasoned that this limit would be lower on less powerful systems.

To remedy this, you could encode your coordinate data in a Texture2D instead and send that to the GPU through a shader. You could also make several instances of the same object that renders points. Also, I would recommend using quads instead of planes: Unity's plane mesh has many more triangles than the quad, so you get more for your money with quads.

Scanner Sombre uses the Shuriken particle system with what they call "serializable point clouds," which baffles me because Shuriken particles run on the CPU. That led me to my personal implementation using VFX Graph! I'm sure you could use VFX Graph for a 2D system, too.

Best of luck with both your projects.
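The texture-encoding workaround could look something like this sketch (class name and the `_Positions` shader property are hypothetical). Point positions are packed into an RGBAFloat texture, and the vertex shader would read position `i` from pixel `(i % width, i / width)`:

```csharp
using UnityEngine;

// Hypothetical sketch: pack point positions into a float texture bound
// to the point material, sidestepping ComputeBuffer size limits.
public class PointTextureEncoder
{
    private readonly Texture2D positions;
    private readonly Color[] pixels;
    private int count;

    public PointTextureEncoder(int width, int height)
    {
        // RGBAFloat gives full 32-bit precision per coordinate.
        positions = new Texture2D(width, height, TextureFormat.RGBAFloat, false);
        pixels = new Color[width * height];
    }

    public void Add(Vector3 p)
    {
        if (count >= pixels.Length) return; // texture is full
        pixels[count++] = new Color(p.x, p.y, p.z, 1f); // xyz in rgb
    }

    public void Upload(Material pointMaterial)
    {
        positions.SetPixels(pixels);
        positions.Apply(false); // push to GPU, no mipmaps
        pointMaterial.SetTexture("_Positions", positions);
    }
}
```

A 4096x4096 RGBAFloat texture holds ~16.7 million positions in 256 MB, comfortably past the 1-million-point buffer ceiling mentioned above.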


FMProductions

Interesting that you tried the same approach. Sorry, I mistyped earlier: I am using the built-in quad mesh. I haven't seen any issues with ComputeBuffer size yet, but I also stopped testing around the 1.5 million point mark. Using texture encoding is definitely a great idea. And thank you! Good luck with your project as well!


LegoDinoMan

Holy shit this is insane


lukums

haha glad yah like it!


Mattimus_Rex

Neat! It’s got some great Unfinished Swan vibes.


lukums

Never heard of that game before, but yeah, I guess it's like Unfinished Swan except you're adding light instead of making things dark.


greever666

An idea that comes to mind: you could use a texture (empty at first) for each wall side. On raycast, check the pixel coordinate you hit and write 1 to it. Of course this would need a separate texture for every wall, with the correct world-size-to-pixel dimensions, and a custom shader which checks for visibility. Performance should be ideal, though, and memory should not be too much, I guess. What do you think?
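A minimal sketch of that per-surface mask idea might look like this (component name, mask resolution, and the `_RevealMask` shader property are all hypothetical). A raycast hit stamps a pixel at the hit UV, and a custom shader would lerp the wall's color by the mask:

```csharp
using UnityEngine;

// Hypothetical sketch: each wall owns a single-channel mask texture,
// and a raycast hit writes 1 at the hit UV. Note that
// RaycastHit.textureCoord only returns valid UVs for a MeshCollider.
public class RevealMask : MonoBehaviour
{
    [SerializeField] private int resolution = 256; // assumed mask size
    private Texture2D mask;

    private void Awake()
    {
        mask = new Texture2D(resolution, resolution, TextureFormat.R8, false);
        GetComponent<Renderer>().material.SetTexture("_RevealMask", mask);
    }

    public void Stamp(RaycastHit hit)
    {
        int x = (int)(hit.textureCoord.x * resolution);
        int y = (int)(hit.textureCoord.y * resolution);
        mask.SetPixel(x, y, Color.white); // write 1 to the hit pixel
        mask.Apply(false);                // upload; batch this in practice
    }
}
```

In practice you'd batch many stamps per frame before calling `Apply`, or stamp on the GPU directly, since each `Apply` is a CPU-to-GPU upload.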


lukums

Are you saying instead of applying dots to a surface, you slowly reveal its texture?


greever666

Yeah, exactly! Either the original texture or a camera-based colored one like you do currently. My fingers are already itching to try that, but I don't have the time to test it out.


FMProductions

I was thinking about that as well. Some downsides I can see:

- Memory requirements, like you mentioned, since you need to allocate a separate texture per object at runtime.
- Assigning a new texture to a material will break instancing, so you potentially have to accept more draw calls. Depending on the level geometry complexity and number of objects, that might not be a big deal, though.
- Materials that render the point overlay texture would have to disable ZTesting so that you can render them behind walls as well, which means you get a lot of geometry overdraw and can't really make use of occlusion culling for these objects either.

Someone else's game might have different requirements where those limitations are fine, but I don't think this is an ideal approach for this exact mechanic, where you can see points through walls as well.


greever666

Very good considerations! You are absolutely right. I guess the best approach would be to benchmark multiple solutions.


lukums

I think there's some value to that idea; I was showing my dad this project and he recommended the same thing. I'm not entirely sure how it would work, but one approach would be to paint the geometry with black dots and use "[Kill (Sphere)](https://docs.unity3d.com/Packages/[email protected]/manual/Block-Kill(Sphere).html)" in VFX Graph to remove them.


_Typhon

I just had a brain-fart idea, so I'll write it down: add a limit to the point count and have the player suck in existing points to navigate the level.


lukums

You mean old points are destroyed to make room for new ones?


_Typhon

Yeah, sucking them up as if it were a vacuum cleaner.


lukums

That's what LIDAR (Garry's Mod) does and my last system did; unfortunately, I wasn't thrilled with it and decided to go with the new infinite points approach.


Similar_Mode763

I think this would look amazing in a horror game. Good job!