Bambu Lab X1 X1C MQTT

Tried playing with the Python script for FTPS on the P1P and made some progress:


Great news! I know some people were looking into automations based on layer number for the P1P, so now they can. This also means it's possibly easier to detect layer information and count swaps with layers; I might look into that again.

Anything new with FTPS on the P1P? It would be great if the FTPS flow were reliable; it used to be hit or miss.

Could you provide the result of the FTPS script on the X1C? I managed to get something like this, and it could be helpful to compare results.


Doing an nlst("*") of all files gets me:

[
	"FSCK0000.REC",
	"FSCK0001.REC",
	"FSCK0002.REC",
	"PETG+_filament_sample.gcode.3mf",
	"System Volume Information",
	"cache", // Also has some 3mf's in it
	"export",
	"ipcam",
	"timelapse",
	"verify_job"
]

So the result of the script is the same. Unfortunately, I'm still getting a node-red crash after trying to fetch it with the flow :confused:

I assume you already tried some of the fixes? If you're running the addon for nodered, it may crash if you don't have python3 installed as a system package for the addon. I also had this problem before when, in my dockerized nodered, the "path" to python3 (just python3) was not set properly - it would crash and restart my whole nodered when it ran.

You could test this by having an empty python node just return the msg and forcing it to run with an inject; if it still crashes, it's likely just a setup issue.


Over the course of multiple reinstallations of node-red, I missed the python3 package. Damn, I have a really short memory...

First step done:

And the second step is done too, but I have no clue what to fix next :smiley:


From that second image I'm not quite sure what's wrong. Both python nodes appear successful, and it progressed to getting the gcode file from HA, so it tried to get the plate number from the gcode file (a regex extracts the number from something like plate_1.gcode). However, after that I see it did not go to the printer name node from HA, so it didn't publish the image to MQTT. So the issue is somewhere at or after the "Get Plate" node... Do any errors come up in the nodered debug sidebar?
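The extraction described above amounts to something like this (a sketch in Python; the actual Get Plate node is a JavaScript function, and the helper name and regex are my assumptions of its logic):

```python
import re

def plate_id_from_gcode(gcode_file):
    """Extract the plate number from a gcode path like 'Metadata/plate_1.gcode'.

    Returns None when the name does not follow the plate_#.gcode scheme.
    """
    m = re.search(r"plate_(\d+)\.gcode", gcode_file)
    return int(m.group(1)) if m else None

print(plate_id_from_gcode("Metadata/plate_1.gcode"))  # 1
print(plate_id_from_gcode("Plate 1.3mf"))             # None
```

A None result here would explain the flow silently stopping at that node when the filename scheme changes.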

I think "Get Plate" is the problem, but there are no errors in debug. Every node after "Get Plate" just isn't triggered.
I would like to clarify something for myself: the /data/fetched folder should be created in the /config/node-red folder when node-red runs as an addon, right? I'm asking because after fetching a file I still see no files in this directory.

I'm not entirely sure where it is in the Home Assistant with NR Addon setup - but that folder is relative to the nodered instance. I'm not sure if it by default views the whole HA instance as the same file structure, or if it also treats it like a docker instance with some internal directory.

I'm surprised that any of the nodes after Get Plate just fail, as that node does not have any ifs in it and will just send the msg after parsing the data.

Toss a few debug nodes set to "full msg" instead of just payload after the Get Plate node (and some others), and run a "Force Fetch" by clicking the button on that inject node. See if anything strange comes up.

Something like this?

Yep, and I can see the problem... I have no clue why the plate name and gcode_file are "Plate 1.3mf"... that has always ended up being "Metadata/plate_1.gcode" on the X1C and P1P for the longest time... if it changed, I will need to rework the logic for how it gets fetched... weird.

Did you name it "Plate 1.3mf" yourself, or was that done automatically? Because if it was done automatically, it might still be reliably parsed; some code just needs to change in "Get Plate". I'm currently working on a major overhaul of all the flows to make everything cleaner, and that will take some time, but it will definitely include the fix. For now, we can try to come up with some quick code fixes to manually patch your instance.

It might be janky for now, but adding this inside the "Get Plate" node after the replace for ".gcode" may work.

plate_name = plate_name.replace(".3mf", "");
plate_name = plate_name.replace("Plate ", "plate_");

Then the rest of the code should process as expected in that node. I think.

I needed to print another plate to check what I would see with different naming. I added your suggested code (thinking it would only work with "Plate 1.3mf"), but without success.

Edit: yep, it works only with that naming, but it works and half of the day is saved :smiley:

As I understand it now, "plate_name" should always be "plate_1".

Edit 2: correct: "plate_1" works for any file name.

Here's my flow for this. A bit of a mess, but you can see what I changed in the python scripts to make it work with the P1P.

Yeah, the gcode file has always been plate_#.gcode, and that was used to fetch both the info for the plate in the json and the correct png image. So seeing "Plate 1.3mf" threw me for a loop. That's why I was suggesting renaming whatever name it had, in an attempt to still get the number ID of the plate.

I'd also suggest removing that from GDrive in case the flow json had your access code or something in it, and just pasting the contents of the "Get Plate" function.

The access code is removed from the flow. Thank you for your help with this.
"Get Plate" alone isn't enough to get the flow working on the P1P. Both PY nodes need to be changed a bit as well.

Oh okay, I'll take a look in a bit. Do you have a quick summary of what had to change in the python flows? nodered does weird formatting when you export, so having just the function contents would be easier to read :slight_smile:

I'll just paste the differences from your code here.

Stock "List 3MF Files (Py)":

li = ftps.nlst("*.3mf")
li2 = ftps.nlst("/cache")

li = li + li2 
msg["files"] = li
ftps.close()
return msg

My "List 3MF Files (Py)":

li = ftps.nlst()
li2 = ftps.nlst("/cache")
li3 = li + li2

list_files = li3
list_3mf = ['.3mf']

result = []
for phrase in list_files:
    if any(name in phrase for name in list_3mf):
        result.append(phrase)

ftps.close()

msg["files"] = result
return msg

Using the "*.3mf" wildcard listing results in "permission denied" on the P1P, so I list everything and filter instead.
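A defensive version could try the server-side wildcard first and only fall back to client-side filtering when the server rejects it. This is a sketch, assuming `ftps` is an already-connected `ftplib.FTP_TLS` instance as in the flow; the function names are mine:

```python
from ftplib import error_perm

def filter_3mf(names):
    """Client-side filter: keep only listing entries ending in .3mf."""
    return [n for n in names if n.endswith(".3mf")]

def list_3mf(ftps, path=""):
    """List .3mf files via the server-side wildcard when supported,
    falling back to a plain NLST plus filtering when the server
    rejects the wildcard (as the P1P appears to)."""
    try:
        return ftps.nlst((path + "/*.3mf") if path else "*.3mf")
    except error_perm:
        return filter_3mf(ftps.nlst(path) if path else ftps.nlst())
```

This way the same node would work on both the X1C (wildcard accepted) and the P1P (wildcard refused).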

Stock "Fetch 3MF File (Py)":

path = "/data/fetched/Bambu_Lab_P1P"
isExist = os.path.exists(path)
if not isExist:
   os.makedirs(path)

localFileName = path + "/current_print.3mf"

with open(localFileName, 'wb') as f:
    ftps.retrbinary('RETR ' + msg["payload"], f.write)

msg["localFilename"] = localFileName
ftps.close()
return msg

My "Fetch 3MF File (Py)":

localFileName = "/data/fetched/Bambu_Lab_P1P/current_print.3mf"

with open(localFileName, 'wb') as f:
    ftps.retrbinary('RETR /cache' + msg["payload"], f.write)

msg["localFilename"] = localFileName
ftps.close()
return msg

ftps.retrbinary should use the "/cache" folder, because files sent to the printer are stored there.

Stock "Get Plate":

let plate_name = msg.gcode_file;
plate_name = plate_name.replace("/data/Metadata/","")
plate_name = plate_name.replace(".gcode", "")
let plate_id = parseInt(plate_name.replace("plate_", ""));

msg.plate_name = plate_name;
msg.plate_id = plate_id;
node.send(msg);

My "Get Plate":

let plate_name = msg.gcode_file;
let plate_id = parseInt(plate_name.replace("plate_", ""));

msg.plate_name = "plate_1";
msg.plate_id = plate_id;
node.send(msg);

Edit: In my code there's still a problem with remote print because of the missing "/cache/" prefix before the filename, but it works from root, and it works like that for now.
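One way to cover both cases (file in root on the X1C, file under /cache on the P1P) is to try the path as given and retry under /cache/ on failure. A sketch, again assuming a connected `FTP_TLS` instance; the function name is mine and the directory is the flow's default:

```python
import os
from ftplib import error_perm

def fetch_3mf(ftps, remote_name, local_dir="/data/fetched/Bambu_Lab_P1P"):
    """Fetch a .3mf, trying the name as given first and then under /cache/
    (where the P1P appears to store files sent over the network)."""
    os.makedirs(local_dir, exist_ok=True)
    local_file = os.path.join(local_dir, "current_print.3mf")
    for remote in (remote_name, "/cache/" + remote_name.lstrip("/")):
        try:
            with open(local_file, "wb") as f:
                ftps.retrbinary("RETR " + remote, f.write)
            return local_file
        except error_perm:
            continue  # not at this path, try the next candidate
    raise FileNotFoundError(remote_name)
```

With this, neither the listing nor the upload location has to be special-cased per printer model.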

@WolfwithSword Have you noticed this printer behavior: it cannot push the next remote print after the previous print is finished? Or is it just my problem?

Hm, I guess the P1P limits FTPS functionality in terms of supported commands. What I will probably do is add an 'if' on whether the model is a P1P; if it is, the flow will do two FTPS lists, one of root and one of the cache folder, and then merge any entries that end in '3mf'.

The fetch won't need modifying then, as the listing will already carry the root or cache path. For Get Plate I will probably keep my version of the modification - I have no clue why the name on yours ended up as "Plate 1.3mf", but assuming that is normal now for the P1P, what I had should resolve it and still give the correct number (hardcoding plate_1 won't work for multi-plate prints).
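The name-tolerant parsing could be sketched like this (in Python for illustration, though the actual Get Plate node is a JavaScript function; the helper name is mine):

```python
import re

def plate_from_name(name):
    """Return (plate_name, plate_id) for either naming scheme seen here:
    'Metadata/plate_1.gcode' (X1C) or 'Plate 1.3mf' (P1P)."""
    m = re.search(r"[Pp]late[_ ](\d+)", name)
    if not m:
        return None, None
    plate_id = int(m.group(1))
    return "plate_%d" % plate_id, plate_id

print(plate_from_name("Metadata/plate_1.gcode"))  # ('plate_1', 1)
print(plate_from_name("Plate 2.3mf"))             # ('plate_2', 2)
```

Unlike hardcoding "plate_1", this keeps the real plate number for multi-plate jobs.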

As for the issue you mentioned, I'm not sure; it might be related to new P1P firmware? I don't have a P1P, but with my X1C I don't have issues.


I made this change to properly list the 3mfs:

li = ftps.nlst()

list_files = li
list_3mf = ['.3mf']

result_root = []
for phrase in list_files:
    if any(name in phrase for name in list_3mf):
        result_root.append(phrase)

li2 = ftps.nlst("/cache")

list_files_cache = li2
list_3mf_cache = ['.3mf']

result_cache = []
for phrase in list_files_cache:
    if any(name in phrase for name in list_3mf_cache):
        result_cache.append("/cache/"+phrase)

li3 = result_root + result_cache

ftps.close()

msg["files"] = li3
return msg

And without the modification, fetch still tried to get the file only from root. The P1P sends files only to the cache folder, so fetch can't find them in root and the script keeps crashing.

Edit: I did something wrong when restoring the stock fetch code. You are right, there's no need to modify fetch.