Tried to play with the python script for FTPS on the P1P and have made some progress:
Great news! I know some people were looking into automations based on layer number for the P1P, so now they can. This also means it's possibly easier to detect layer information and count swaps with layers; might look into that again.
Anything new with the FTPS on the P1P? It would be great if the FTPS flow were reliable to use; it used to be hit or miss.
Could you provide the result of the FTPS script on the X1C? I managed to get something like this and it could be helpful to compare results.
Doing an nlst of all files gets me:
[
"FSCK0000.REC",
"FSCK0001.REC",
"FSCK0002.REC",
"PETG+_filament_sample.gcode.3mf",
"System Volume Information",
"cache", // Also has some 3mf's in it
"export",
"ipcam",
"timelapse",
"verify_job"
]
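(For context, a listing like the one above typically comes from an implicit-TLS FTPS session. Below is a minimal sketch of how such a session can be opened with python's ftplib; the port, the "bblp" user and the access-code password are assumptions about how the printer is usually reached, and the IP and access code are placeholders, not the flow's actual node contents.)
import ftplib
import ssl

class ImplicitFTP_TLS(ftplib.FTP_TLS):
    # FTP_TLS variant that wraps the control socket in TLS immediately,
    # since the printer speaks implicit FTPS rather than explicit AUTH TLS.
    _sock = None

    @property
    def sock(self):
        return self._sock

    @sock.setter
    def sock(self, value):
        # Wrap any plain socket in TLS as soon as ftplib assigns it.
        if value is not None and not isinstance(value, ssl.SSLSocket):
            value = self.context.wrap_socket(value)
        self._sock = value

ctx = ssl.create_default_context()
ctx.check_hostname = False      # the printer presents a self-signed certificate
ctx.verify_mode = ssl.CERT_NONE

ftps = ImplicitFTP_TLS(context=ctx)
ftps.connect(host="192.168.1.50", port=990, timeout=10)  # placeholder printer IP
ftps.login(user="bblp", passwd="YOUR_ACCESS_CODE")       # placeholder LAN access code
ftps.prot_p()                                            # encrypt the data channel too
print(ftps.nlst())                                       # should print a listing like the one above
ftps.quit()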
So the result of the script is the same. Unfortunately I'm still getting a node-red crash after trying to fetch it with the flow.
I assume you already tried some of the fixes? If you're running the addon for nodered, it may crash if you don't have python3 installed as a system package for the addon. Additionally, I had this problem before in my dockerized nodered when my "path" to python3 (just python3) was not set properly - it would crash and restart my whole nodered when it ran.
You could test this by just having an empty python node return the msg and force that to run with an inject; if it still crashes, it's likely just a setup issue.
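(For reference, the test node body can be this and nothing else - a sketch assuming a python-function style node where the script gets msg and returns it:)
# Do-nothing python node body: just pass the message straight through.
# If even this crashes node-red, the problem is the python3 setup, not the flow.
return msg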
After multiple reinstallations of node-red I missed the python3 package. Damn, I have a really short memory…
First step is behind me:
And the second step is behind me too, but I have no clue what's next to fix.
From that second image I'm not quite sure what's wrong. Both python nodes appear successful, and it progressed to getting the gcode file from HA, so it tried to get the plate number from the gcode file (it is based on a regex extracting the number from something like plate_1.gcode). However after that I see it did not go to the printer name node from HA, so it didn't publish the image into MQTT. So the issue is somewhere at or after the "Get Plate" node… Do any errors come up in the nodered debug sidebar?
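(As an aside, that step is just pulling the number out of the filename; a tiny hypothetical python illustration of the idea - the real node does the same thing in JavaScript:)
import re

# Extract the plate number from a name like "Metadata/plate_1.gcode".
m = re.search(r"plate_(\d+)", "Metadata/plate_1.gcode")
plate_id = int(m.group(1)) if m else None
print(plate_id)  # -> 1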
I think "Get Plate" is problematic, but there are no errors in debug. No node after "Get Plate" is triggered.
I would like to clarify something for myself. The /data/fetched folder should be created in the /config/node-red folder when node-red runs as an addon, right? I'm asking because after fetching a file I still see no files in this directory.
I'm not entirely sure where it is in the HomeAssistant with NR Addon setup - but that folder is relative to the nodered instance. I'm not sure if it by default views the whole HA instance as the same file structure, or if it also treats it like a docker instance with some internal directory.
I'm surprised that any of the nodes after Get Plate just fail, as that node does not have any ifs in it and will just send the msg after parsing the data.
Toss a few debug nodes set to "full msg" instead of just payload after the Get Plate node and some others, and run a "Force Fetch" by clicking the button on that inject node. See if anything strange comes up.
Yep, and I can see the problem… I have no clue why the plate name and gcode_file are "Plate 1.3mf"… that has always ended up being "Metadata/plate_1.gcode" on the X1C and P1P for the longest time… if it changed, I will need to rework the logic on how it gets fetched… weird.
Did you name it "Plate 1.3mf" or do you think that was done automatically? Because if it was done automatically, it might still be reliably parsed; it just needs some code to change in "Get Plate". I'm currently working on a major overhaul of all the flows to make it all cleaner and that will take some time, but it will definitely include the fix. For now, we can try to come up with some quick code fixes to manually edit your instance.
Might be janky for now, but adding this inside the "Get Plate" node after the replace for ".gcode" may work:
plate_name = plate_name.replace(".3mf", "");
plate_name = plate_name.replace("Plate ", "plate_");
Then the rest of the code should process as expected in that node. I think.
I needed to print another plate to check what I would see with different naming. I added your suggested code (thinking it could work only with "Plate 1.3mf") but without success.
Edit: yep, it works only with that naming, but it works and half of the day is saved.
As I understand it now, "plate_name" should always be "plate_1".
Edit2: correct, "plate_1" works for any file name.
Here's my flow for this. A bit of a mess, but you can see what I changed in the python scripts to work with the P1P.
Yeah, the gcode file has always been plate_#.gcode, and that was used to fetch both the info for the plate in the json as well as the correct png image. So seeing "Plate 1.3mf" threw me for a loop. So I was suggesting trying to rename whatever name it had in an attempt to still get the number ID of the plate.
Also, I would suggest removing that from GDrive in case the flow json had your access code or something in it, and just pasting the contents of the "Get Plate" function.
The access code is removed from the flow. Thank you for your help with this.
"Get Plate" isn't enough to get the flow working on the P1P. Both python nodes need to be changed a bit as well.
Oh okay, I'll take a look in a bit. Do you have a quick summary of what had to change in the python flows? nodered does weird formatting when you export it, so having just the function contents would be easier to read.
I'll just paste here only the differences relative to your code.
Stock "List 3MF Files (Py)":
li = ftps.nlst("*.3mf")
li2 = ftps.nlst("/cache")
li = li + li2
msg["files"] = li
ftps.close()
return msg
My "List 3MF Files (Py)":
li = ftps.nlst()
li2 = ftps.nlst("/cache")
li3 = li + li2
list_files = li3
list_3mf = ['.3mf']
result = []
# keep only the entries that contain ".3mf"
for phrase in list_files:
    if any(name in phrase for name in list_3mf):
        result.append(phrase)
ftps.close()
msg["files"] = result
return msg
Listing with the "*.3mf" wildcard results in "permission denied" on the P1P, so I list everything and filter instead.
Stock "Fetch 3MF File (Py)":
path = "/data/fetched/Bambu_Lab_P1P"
isExist = os.path.exists(path)
if not isExist:
    os.makedirs(path)
localFileName = path + "/current_print.3mf"
with open(localFileName, 'wb') as f:
    ftps.retrbinary('RETR ' + msg["payload"], f.write)
msg["localFilename"] = localFileName
ftps.close()
return msg
My "Fetch 3MF File (Py)":
localFileName = "/data/fetched/Bambu_Lab_P1P/current_print.3mf"
with open(localFileName, 'wb') as f:
    ftps.retrbinary('RETR /cache' + msg["payload"], f.write)
msg["localFilename"] = localFileName
ftps.close()
return msg
ftps.retrbinary should use the "/cache" folder because files sent to the printer are stored there.
Stock "Get Plate":
let plate_name = msg.gcode_file;
plate_name = plate_name.replace("/data/Metadata/","")
plate_name = plate_name.replace(".gcode", "")
let plate_id = parseInt(plate_name.replace("plate_", ""));
msg.plate_name = plate_name;
msg.plate_id = plate_id;
node.send(msg);
My "Get Plate":
let plate_name = msg.gcode_file;
let plate_id = parseInt(plate_name.replace("plate_", ""));
msg.plate_name = "plate_1";
msg.plate_id = plate_id;
node.send(msg);
Edit: In my code there's still a problem with remote prints because of the missing "/cache/" directory before the filename, but it works from the root, and it works like that for now.
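(If that keeps biting, one way to make the fetch tolerant of filenames with or without the folder prefix is a simple fallback loop - just a sketch, reusing the same ftps session and msg fields as the snippets above:)
import os

path = "/data/fetched/Bambu_Lab_P1P"
os.makedirs(path, exist_ok=True)
localFileName = path + "/current_print.3mf"

remote = msg["payload"]
# Slicer-sent files land in /cache on the P1P, so try the name as given first
# and fall back to the /cache folder if the first attempt fails.
candidates = [remote] if remote.startswith("/") else [remote, "/cache/" + remote]

for candidate in candidates:
    try:
        with open(localFileName, 'wb') as f:
            ftps.retrbinary('RETR ' + candidate, f.write)
        break
    except Exception:
        continue

msg["localFilename"] = localFileName
ftps.close()
return msg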
@WolfwithSword Have you noticed the printer behavior where it cannot push the next remote print after the previous print is finished? Or is it just my problem?
Hm, I guess the P1P decides to limit FTPS functionality in terms of commands. What I will probably do is put an "if" on whether the model is a P1P or not, and if it's a P1P then it will do two FTPS lists, one of the root and one of the cache folder, and then merge any entries that end in ".3mf".
The fetch won't need to be modified then, as the paths should then properly include the root or cache folder. For Get Plate I will probably keep my version of the modification - I have no clue why the name on yours ended up as "Plate 1.3mf", but assuming that is normal now for the P1P, what I had should resolve it and still give the correct number (hardcoding plate_1 won't work for multi-plate prints).
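(Roughly what that branch could look like - a sketch only; the msg["printer_model"] property name is hypothetical and not something the flow already sets:)
# Planned branch: the P1P rejects a wildcard NLST, so list the root and /cache
# separately and keep anything ending in ".3mf"; other models keep the wildcard.
if msg.get("printer_model") == "P1P":  # hypothetical property name
    files = [f for f in ftps.nlst() if f.endswith(".3mf")]
    files += ["/cache/" + f for f in ftps.nlst("/cache") if f.endswith(".3mf")]
else:
    files = ftps.nlst("*.3mf") + ftps.nlst("/cache")

msg["files"] = files
ftps.close()
return msg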
As for the issue you mentioned, I'm not sure; it might be related to new P1P firmware? I don't have a P1P, but with my X1C I don't have issues.
I did this change to properly list 3mfs:
li = ftps.nlst()
list_files = li
list_3mf = ['.3mf']
result_root = []
# .3mf files in the root folder
for phrase in list_files:
    if any(name in phrase for name in list_3mf):
        result_root.append(phrase)
li2 = ftps.nlst("/cache")
list_files_cache = li2
list_3mf_cache = ['.3mf']
result_cache = []
# .3mf files in /cache, prefixed so the fetch step gets the full path
for phrase in list_files_cache:
    if any(name in phrase for name in list_3mf_cache):
        result_cache.append("/cache/"+phrase)
li3 = result_root + result_cache
ftps.close()
msg["files"] = li3
return msg
And without the modification the fetch still tried to get the file only from the root. The P1P sends files only to the cache folder, so the fetch can't find them in the root and the script kept crashing.
Edit: I did something wrong when restoring the stock fetch code. You are right, there's no need to modify the fetch.