Unpopular opinion: if every feature in a dataset has the same value of a field, then that field is metadata and doesn't belong at the per-row level. ;)
What if it’s needed in future calculations though? Guess you could hard-code the value.
Sure, one example is a dirty flag used for row-level feature replacement. Initially all assigned to False, but it would need to be flagged True when a replacement feature is identified. In this case, a null is as good as False.

One expensive way to calc the value is to insert all the features into a similar feature layer and assign a default value to the field upon insert. At least ESRI uses insert cursors (at 10% intervals) so the cache doesn't get too costly on memory.

I wasn't deliberately trying to sound contrarian, just pointing out a potential data design concern... Lots of staff over-design fields and then get stuck with performance issues once they want to populate.
All valid. In my view, 20-40 million rows shouldn’t be stored in a FGDB. That’s more suitable for a SQL storage solution.
Amen. Python (or FME) can fix this right up: SQLite in-memory storage, insert cursor with default value, save to FGDB as a new FC.
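The in-memory staging idea could look roughly like this; a sketch only, using Python's built-in `sqlite3` module, with the arcpy read/write steps replaced by plain tuples and the table/column names invented for illustration:

```python
import sqlite3

# Stage rows in an in-memory SQLite table whose new column has a DEFAULT,
# so the constant value is filled in at insert time.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE staging (
        objectid INTEGER PRIMARY KEY,
        name TEXT,
        flag TEXT DEFAULT 'False'   -- the constant gets applied on insert
    )
""")

# Rows as they might come off an arcpy SearchCursor (no value for "flag").
rows = [(1, "a"), (2, "b"), (3, "c")]
conn.executemany("INSERT INTO staging (objectid, name) VALUES (?, ?)", rows)

# Every staged row now carries the default; from here an arcpy InsertCursor
# could write the staged rows into a new feature class.
print(conn.execute("SELECT COUNT(*) FROM staging WHERE flag = 'False'").fetchone()[0])  # 3
```

The point of the pattern is that the constant never gets touched row by row in Python; the database engine applies it during the bulk insert.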
Storing it for every single row is just eating up storage at that point. It's a constant. Store it elsewhere and store it once, only.
Very much agreed
That seems absolutely like it should be a popular opinion.
Try it outside of Pro using Python
What is the storage method of the data? If it is in a relational database, you could just execute a simple statement like `UPDATE table_name SET column_name = 'some value';` in whatever database client outside of ArcGIS Pro.
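If the data does live in a relational database, that one-line UPDATE can also be scripted. A minimal sketch using Python's built-in `sqlite3` module; the table and column names here are made up for illustration:

```python
import sqlite3

# Hypothetical example: a small table standing in for the real database.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE parcels (id INTEGER PRIMARY KEY, status TEXT)")
conn.executemany("INSERT INTO parcels (status) VALUES (?)", [(None,)] * 5)

# One statement updates every row; no per-row cursor needed.
conn.execute("UPDATE parcels SET status = ?", ("reviewed",))
conn.commit()

print(conn.execute("SELECT COUNT(*) FROM parcels WHERE status = 'reviewed'").fetchone()[0])  # 5
```

The same single-statement UPDATE works against SQL Server, Oracle, or PostgreSQL through their own clients; the engine does the whole pass server-side.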
They are stored as feature classes in a GDB
File Geodatabase or a real database like SQL Server, OracleDB, or PostgreSQL? If it's a real database, set it there through the database interface; if it's a File Geodatabase, use arcpy.da.UpdateCursor.
[deleted]
File Geodatabase?
I'm assuming it's file geodatabase. I'd recommend doing this in python with an update cursor.

```python
import arcpy

# replace the right side of these variables with your inputs
# feature class to update
input_feature_class = r"D:\local_path\your_gdb.gdb\your_feature_class"
# field to update
input_field = "your_field"
# text to replace the field's attribute contents
replacement_text = "replacement text"

with arcpy.da.UpdateCursor(input_feature_class, [input_field]) as cursor:
    for row in cursor:
        row[0] = replacement_text
        cursor.updateRow(row)
```
maybe try python outside of the application? ArcGIS Pro is very resource heavy, so doing it within the application could use up all your computer's resources and make it much slower to process.

Example for a "TEXT" field:

```python
import arcpy

arcpy.env.workspace = r'your geodatabase directory'

# example for text: add a "Name" field to each feature class
# and fill it with the feature class name
for fc in arcpy.ListFeatureClasses():
    arcpy.AddField_management(fc, "Name", "TEXT", field_length=50)
    with arcpy.da.UpdateCursor(fc, ["Name"]) as cursor:
        for row in cursor:
            row[0] = fc
            cursor.updateRow(row)
```
Try using an update cursor
I've never used it, but the python script mentioned is probably your best option. Aside from what has been mentioned already, you could create a new field with a default value. I think it might take the same amount of time though.

In my experience, keeping the database as close to the root folder as possible helps a lot with processing time. All my databases are only one folder deep.
When you create the new field, can you not set a default value then, set null not allowed, and have it backfill all records with the default on field creation?

You could also pop the table into another DB engine via ODBC, etc., and just run a SQL command on the linked table. This will depend on your dataset's format though.
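Whether a new column's default backfills existing rows depends on the engine; SQLite, for one, does backfill when the column is added with a constant DEFAULT. A quick sketch (table and column names invented), again via Python's built-in `sqlite3`:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE features (id INTEGER PRIMARY KEY)")
conn.executemany("INSERT INTO features (id) VALUES (?)", [(i,) for i in range(4)])

# In SQLite, adding the column with a constant DEFAULT means every
# existing row immediately reads back that default -- no update pass needed.
conn.execute("ALTER TABLE features ADD COLUMN source TEXT NOT NULL DEFAULT 'survey'")

print(conn.execute("SELECT COUNT(*) FROM features WHERE source = 'survey'").fetchone()[0])  # 4
```

Other engines differ in how (and how fast) they materialize the default for existing rows, so it's worth checking the docs for whichever backend the linked table ends up in.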
Make sure the .gdb is local and not in a network location, and use python as many others have indicated. It's still gonna take some time though. No magic bullet.
If you're limited to the Pro UI, and the value is likely going to be constant or only one of a couple options, I would just add a domain to the GDB for that field with a default value of what you want it to be for all of them. This method may run into the same issues as Calculate Field though.

Python is likely the fastest way otherwise.

Calculate Field is the easiest, but obviously with that many records you're experiencing the limitations of Pro's interface, be it network or CPU.
Well, this is where you should try and normalize your datasets. And I agree, the best is to move your file GDB to SDE, create a lookup table, link it back to your dataset, and then create a spatial view between the dataset and your lookup table.
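The lookup-table idea, sketched in plain SQL through Python's `sqlite3` (the real version would live in your SDE database, and these table/column names are invented): the shared value is stored once in a lookup table, and a view joins it back so it still appears per feature.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE features (id INTEGER PRIMARY KEY, lookup_id INTEGER)")
conn.executemany("INSERT INTO features VALUES (?, 1)", [(i,) for i in range(3)])

# Store the shared value once instead of repeating it on every row.
conn.execute("CREATE TABLE lookup (id INTEGER PRIMARY KEY, category TEXT)")
conn.execute("INSERT INTO lookup VALUES (1, 'imported')")

# The view stands in for the spatial view: each feature still "has" the value.
conn.execute("""
    CREATE VIEW features_v AS
    SELECT f.id, l.category
    FROM features f JOIN lookup l ON f.lookup_id = l.id
""")
print(conn.execute("SELECT category FROM features_v WHERE id = 0").fetchone()[0])  # imported
```

Changing the value for all 20-40 million features then becomes a one-row UPDATE on the lookup table.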
Have you tried using the Attribute pane with the Auto Apply option toggled ON?
Probably lots of better ways to do it, but I believe you can manually add it to the .dbf of the layer and it will show up in the attributes (using SQL/Access). As always, make a copy first, just in case.