Create all API server routes

This commit is contained in:
SavaletDev
2022-06-26 18:36:36 +02:00
parent d9b5f3eb8c
commit fae25e7448
1876 changed files with 2372 additions and 380683 deletions

2
.gitignore vendored

@@ -1 +1 @@
node_modules
api/node_modules


@@ -5072,3 +5072,507 @@
[2022-06-19 20:05:04] [DEBUG] GET from : undefined, undefined
[2022-06-19 20:27:58] [DEBUG] GET from : undefined, undefined
[2022-06-19 20:53:45] [DEBUG] GET from : undefined, undefined
[2022-06-26 17:26:57]
███╗ ███╗███████╗██████╗ ██████╗██╗ ██╗██████╗ ██╗ ██╗ ██████╗██╗ ██████╗ ██╗ ██╗██████╗ █████╗ ██████╗ ██╗
████╗ ████║██╔════╝██╔══██╗██╔════╝██║ ██║██╔══██╗╚██╗ ██╔╝ ██╔════╝██║ ██╔═══██╗██║ ██║██╔══██╗ ██╔══██╗██╔══██╗██║
██╔████╔██║█████╗ ██████╔╝██║ ██║ ██║██████╔╝ ╚████╔╝ ██║ ██║ ██║ ██║██║ ██║██║ ██║ ███████║██████╔╝██║
██║╚██╔╝██║██╔══╝ ██╔══██╗██║ ██║ ██║██╔══██╗ ╚██╔╝ ██║ ██║ ██║ ██║██║ ██║██║ ██║ ██╔══██║██╔═══╝ ██║
██║ ╚═╝ ██║███████╗██║ ██║╚██████╗╚██████╔╝██║ ██║ ██║ ╚██████╗███████╗╚██████╔╝╚██████╔╝██████╔╝ ██║ ██║██║ ██║
╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═╝ ╚═╝ ╚═════╝╚══════╝ ╚═════╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═╝╚═╝ ╚═╝
[2022-06-26 17:26:57] [INFO] Database succefull connected ! (13808)
[2022-06-26 17:27:10]
[2022-06-26 17:27:10] [INFO] Database succefull connected ! (13813)
[2022-06-26 17:27:10] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 17:27:12] [DEBUG] GET from : undefined, undefined
[2022-06-26 17:27:23] [object Object]
[2022-06-26 17:27:24] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 17:29:47]
[2022-06-26 17:29:47] [INFO] Database succefull connected ! (13854)
[2022-06-26 17:29:47] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 17:29:48] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 17:29:48] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 17:29:51] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 17:43:19]
[2022-06-26 17:43:19] [INFO] Database succefull connected ! (13916)
[2022-06-26 17:43:19] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 17:44:00]
[2022-06-26 17:44:00] [INFO] Database succefull connected ! (13919)
[2022-06-26 17:44:00] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 17:44:27]
[2022-06-26 17:44:27] [INFO] Database succefull connected ! (13923)
[2022-06-26 17:44:27] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 17:44:41]
[2022-06-26 17:44:41] [INFO] Database succefull connected ! (13924)
[2022-06-26 17:44:41] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 17:45:18]
[2022-06-26 17:45:18] [INFO] Database succefull connected ! (13928)
[2022-06-26 17:45:18] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 17:56:10] undefined
[2022-06-26 17:56:10]
[2022-06-26 17:56:10] [INFO] Database succefull connected ! (13985)
[2022-06-26 17:56:10] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 17:56:46]
[2022-06-26 17:56:46] [INFO] Database succefull connected ! (13986)
[2022-06-26 17:56:46] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 17:56:48] [DEBUG] GET from : 0, undefined
[2022-06-26 17:56:51] [DEBUG] GET from : 0, undefined
[2022-06-26 17:58:33]
[2022-06-26 17:58:33] [INFO] Database succefull connected ! (13994)
[2022-06-26 17:58:33] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 17:58:43]
[2022-06-26 17:58:43] [INFO] Database succefull connected ! (13995)
[2022-06-26 17:58:43] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 17:58:44] [DEBUG] GET from : undefined, undefined
[2022-06-26 17:58:45] [DEBUG] GET from : undefined, undefined
[2022-06-26 17:58:49] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:02:23] [DEBUG] GET from : undefined, undefined
[2022-06-26 18:05:55]
[2022-06-26 18:05:55] [INFO] Database succefull connected ! (14019)
[2022-06-26 18:05:55] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:14:32]
[2022-06-26 18:14:32] [INFO] Database succefull connected ! (14068)
[2022-06-26 18:16:16]
[2022-06-26 18:16:16] [INFO] Database succefull connected ! (14078)
[2022-06-26 18:16:41]
[2022-06-26 18:16:41] [INFO] Database succefull connected ! (14080)
[2022-06-26 18:17:49]
[2022-06-26 18:17:49] [INFO] Database succefull connected ! (14088)
[2022-06-26 18:17:49] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:17:51] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:17:53] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:17:56] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:18:00] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:18:33]
[2022-06-26 18:18:33] [INFO] Database succefull connected ! (14096)
[2022-06-26 18:18:33] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:19:00]
[2022-06-26 18:19:00] [INFO] Database succefull connected ! (14099)
[2022-06-26 18:19:01] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:19:06] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:19:14] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:19:22] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:19:25] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:19:28] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:21:00]
[2022-06-26 18:21:00] [INFO] Database succefull connected ! (14110)
[2022-06-26 18:21:00] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:25:25]
[2022-06-26 18:25:25] [INFO] Database succefull connected ! (14128)
[2022-06-26 18:25:25] ouiuiuiui
[2022-06-26 18:25:58]
[2022-06-26 18:25:58] [INFO] Database succefull connected ! (14129)
[2022-06-26 18:25:58] ouiuiuiui
[2022-06-26 18:25:58] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:26:10]
[2022-06-26 18:26:10] [INFO] Database succefull connected ! (14133)
[2022-06-26 18:26:10] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:27:07]
[2022-06-26 18:27:07] [INFO] Database succefull connected ! (14137)
[2022-06-26 18:27:07] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:27:13] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:27:37]
[2022-06-26 18:27:37] [INFO] Database succefull connected ! (14139)
[2022-06-26 18:27:37] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:28:04]
[2022-06-26 18:28:04] [INFO] Database succefull connected ! (14143)
[2022-06-26 18:28:04] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:28:16]
[2022-06-26 18:28:16] [INFO] Database succefull connected ! (14144)
[2022-06-26 18:28:16] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:28:49]
[2022-06-26 18:28:49] [INFO] Database succefull connected ! (14145)
[2022-06-26 18:28:58]
[2022-06-26 18:28:58] [INFO] Database succefull connected ! (14146)
[2022-06-26 18:29:04]
[2022-06-26 18:29:04] [INFO] Database succefull connected ! (14150)
[2022-06-26 18:29:04] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:29:14]
[2022-06-26 18:29:14] [INFO] Database succefull connected ! (14151)
[2022-06-26 18:29:14] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:29:25] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:30:40]
[2022-06-26 18:30:40] [INFO] Database succefull connected ! (14158)
[2022-06-26 18:30:43]
[2022-06-26 18:30:43] [INFO] Database succefull connected ! (14159)
[2022-06-26 18:30:43] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:30:57]
[2022-06-26 18:30:57] [INFO] Database succefull connected ! (14160)
[2022-06-26 18:30:57] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:31:57]
[2022-06-26 18:31:57] [INFO] Database succefull connected ! (14164)
[2022-06-26 18:31:57] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:31:58] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:32:03] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:32:03] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:32:12]
[2022-06-26 18:32:12] [INFO] Database succefull connected ! (14169)
[2022-06-26 18:32:21]
[2022-06-26 18:32:21] [INFO] Database succefull connected ! (14170)
[2022-06-26 18:32:21] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:32:22] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:32:22] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:32:27] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:32:30] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:32:30] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:32:34] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:33:11]
[2022-06-26 18:33:11] [INFO] Database succefull connected ! (14174)
[2022-06-26 18:33:11] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:33:14] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:35:17]
[2022-06-26 18:35:17] [INFO] Database succefull connected ! (14181)
[2022-06-26 18:35:17] [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !
[2022-06-26 18:35:19] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:35:19] [DEBUG] Product [object Undefined] created !
[2022-06-26 18:35:19] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:35:19] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:35:22] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:35:22] [DEBUG] Product [object Undefined] deleted !
[2022-06-26 18:35:22] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:35:22] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586
[2022-06-26 18:35:25] [DEBUG] GET from : undefined, 0a6d6d0f-07e8-436c-bf18-1c6cbf795586

api/routes/create-product.js Normal file

@@ -0,0 +1,28 @@
const router = require('express').Router();
const server = require('../server.js')
const jsonParser = server.parser.json()
router.post('', jsonParser, function (req, res) {
    const ipInfo = server.ip(req);
    server.logger(' [DEBUG] POST from : ' + ipInfo.clientIp.replace('::ffff:', '') + `, ${req.query.uuid}`)
    // Parameterized queries: user input is never interpolated into the SQL string.
    var sql = 'SELECT token FROM users WHERE uuid = ?';
    server.con.query(sql, [req.query.uuid], function (err, result) {
        if (err) {server.logger(" [ERROR] Database error\n " + err)};
        if (result.length == 0) {
            return res.json({'error': true, 'code': 404})
        } else {
            if (result[0].token === req.query.token) {
                var sql = 'INSERT INTO mc_products (id, name, description, price, cpu, cpu_pinning, ram, disk, swap, io, egg, startup_command, env) VALUES (?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?, ?)';
                var values = [server.crypto.randomBytes(3).toString('hex'), req.body.name, req.body.description, req.body.price, req.body.cpu, req.body.cpu_pinning, req.body.ram, req.body.disk, req.body.swap, req.body.io, req.body.egg, req.body.startup_command, req.body.env];
                server.con.query(sql, values, function (err) {
                    // Respond inside the callback so a failed insert can still be reported.
                    if (err) {server.logger(" [ERROR] Database error\n " + err); return res.json({'error': true, 'response': 'Database error'})};
                    server.logger(" [DEBUG] Product " + req.body.name + " created !")
                    return res.json({"error": false, "response": "OK"});
                });
            } else {
                return res.json({'error': true, 'code': 403})
            }
        }
    });
})
module.exports = router;

api/routes/create-user.js Normal file

@@ -0,0 +1,16 @@
const router = require('express').Router();
const server = require('../server.js')
const bcrypt = require('bcrypt')
const uuid = require('uuid')
const jsonParser = server.parser.json()
// server.js mounts this router at /api/create-user, so the path here stays empty.
router.post('', jsonParser, function (req, res) {
    bcrypt.hash(req.body.password, 10, function(err, hash) {
        var sql = 'INSERT INTO users (uuid, username, mail, token, password, balance, tickets, services, suspend_services, alerts) VALUES (?, ?, ?, ?, ?, 0, 0, 0, 0, 0)';
        var values = [uuid.v4(), req.body.username, req.body.mail.toLowerCase(), server.crypto.randomBytes(20).toString('hex'), hash];
        server.con.query(sql, values, function (err) {
            if (err) {server.logger(" [ERROR] Database error\n " + err); return res.json({'error': true, 'response': 'Database error'})};
            server.logger(" [INFO] User " + req.body.username + " created !")
            return res.json({"response": "OK"})
        });
    });
})
module.exports = router;

api/routes/delete-product.js Normal file

@@ -0,0 +1,30 @@
const router = require('express').Router();
const server = require('../server.js')
const jsonParser = server.parser.json()
router.delete('', jsonParser, function (req, res) {
    const ipInfo = server.ip(req);
    server.logger(' [DEBUG] DELETE from : ' + ipInfo.clientIp.replace('::ffff:', '') + `, ${req.query.uuid}`)
    var sql = 'SELECT token FROM users WHERE uuid = ?';
    server.con.query(sql, [req.query.uuid], function (err, result) {
        if (err) {server.logger(" [ERROR] Database error\n " + err)};
        if (result.length == 0) {
            return res.json({'error': true, 'code': 404})
        } else {
            if (result[0].token === req.query.token) {
                var sql = 'DELETE FROM mc_products WHERE id = ?';
                server.con.query(sql, [req.body.id], function (err) {
                    // Respond inside the callback so a database error can still be reported.
                    if (err) {server.logger(" [ERROR] Database error\n " + err); return res.json({'error': true, 'response': 'Database error'})};
                    server.logger(" [DEBUG] Product " + req.body.id + " deleted !")
                    return res.json({"error": false, "response": "OK"});
                });
            } else {
                return res.json({'error': true, 'code': 403})
            }
        }
    });
})
module.exports = router;

api/routes/example.js Normal file

@@ -0,0 +1,6 @@
var router = require('express').Router();
const server = require('../server.js')
module.exports = router;

api/routes/index.js Normal file

@@ -0,0 +1,70 @@
var router = require('express').Router();
const server = require('../server.js')
router.get('', (req, res) => {
    const ipInfo = server.ip(req);
    server.logger(' [DEBUG] GET from : ' + ipInfo.clientIp.replace('::ffff:', '') + `, ${req.query.uuid}`)
    var sql = 'SELECT token FROM users WHERE uuid = ?';
    server.con.query(sql, [req.query.uuid], function (err, result) {
        if (err) {server.logger(" [ERROR] Database error\n " + err)};
if (result.length == 0) {
return res.json({'error': true, 'code': 404})
} else {
if (result[0].token === req.query.token) {
var activity = []
activity.push({
"name": "Maintenance Serveur Epsilon",
"date": "17 FEV 15:59"
})
activity.push({
"name": "Maintenance réseau",
"date": "11 JUL 8:10"
})
activity.push({
"name": "Maintenance DNS",
"date": "15 JUN 11:00"
})
return res.json(
{
"error": false,
"username": "Savalet",
"stats_array": {
"CPU": [15, 5, 25, 86, 45, 66, 15],
"RAM": [72, 96, 56, 60, 74, 60, 78]
},
"counters": [58.6 + '€', 68.5 + '€', 16, 3, 0, 0],
"activity": activity,
"invoices_table": [
{
"name": "Paiement par mois VPS5",
"date": "18/03/2022",
"price": 185.25,
"status": "Terminé"
},
{
"name": "Developpement site web",
"date": "22/02/2022",
"price": 18.80,
"status": "En Attente"
}, {
"name": "Paiement par mois DEDI1",
"date": "22/02/2022",
"price": 485.25,
"status": "Remboursé"
},
{
"name": "Paiement par mois VPS5",
"date": "18/02/2022",
"price": 185.25,
"status": "Terminé"
}
],
"get_ip": ipInfo.clientIp.replace('::ffff:', '')
});
} else {
return res.json({'error': true, 'code': 403})
}
}
});
});
module.exports = router;
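The `::ffff:` prefix only appears on IPv4-mapped IPv6 addresses, so splitting on it and taking index `[1]` yields `undefined` for plain addresses, which is exactly what produced the `undefined` entries in latest.log. Stripping the prefix with `replace` handles both forms:

```js
// Normalise an address reported by ipware: strip the IPv4-mapped prefix
// when present, and leave plain addresses untouched.
function normaliseIp(clientIp) {
    return clientIp.replace('::ffff:', '');
}

console.log(normaliseIp('::ffff:82.65.12.7')); // "82.65.12.7"
console.log(normaliseIp('82.65.12.7'));        // "82.65.12.7"
```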

api/routes/login-user.js Normal file

@@ -0,0 +1,28 @@
const router = require('express').Router();
const server = require('../server.js')
const bcrypt = require('bcrypt')
const jsonParser = server.parser.json()
router.post('', jsonParser, function (req, res) {
    var sql = 'SELECT * FROM users WHERE mail = ?';
    server.con.query(sql, [req.body.mail], function (err, users) {
        if (err) {server.logger(" [ERROR] Database error\n " + err); return res.json({'error': true, 'response': 'Database error'})};
        if (users.length == 0) {
            return res.json({'error': true, 'code': 404})
        } else {
            bcrypt.compare(req.body.password, users[0].password, function(err, match) {
                if (match === true) {
                    // One query is enough: the row already holds uuid and token.
                    return res.json({'error': false, 'uuid': users[0].uuid, 'token': users[0].token})
                } else {
                    return res.json({'error': true, 'code': 403})
                }
            });
        }
    });
    // Do not log req.body here: it contains the plaintext password.
})
module.exports = router;

api/routes/mc-products.js Normal file

@@ -0,0 +1,36 @@
var router = require('express').Router();
const server = require('../server.js')
router.get('', function (req, res) {
    const ipInfo = server.ip(req);
    server.logger(' [DEBUG] GET from : ' + ipInfo.clientIp.replace('::ffff:', '') + `, ${req.query.uuid}`)
    var sql = 'SELECT token FROM users WHERE uuid = ?';
    server.con.query(sql, [req.query.uuid], function (err, result) {
        if (err) {server.logger(" [ERROR] Database error\n " + err)};
        if (result.length == 0) {
            return res.json({'error': true, 'code': 404})
        } else {
            if (result[0].token === req.query.token) {
                var sql = `SELECT * FROM mc_products`;
                server.con.query(sql, function (err, result) {
                    if (err) {server.logger(" [ERROR] Database error\n " + err)};
                    var products = []
for(var i= 0; i < result.length; i++)
{
products.push({
"id": result[i].id,
"name": result[i].name,
"description": result[i].description,
"price": result[i].price
})
}
return res.json({'error': false, 'products': products})
});
} else {
return res.json({'error': true, 'code': 403})
}
}
});
})
module.exports = router;
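The loop above exposes only the public columns of each row. The same projection can be written with `Array.prototype.map` (a standalone sketch; the sample row is invented):

```js
// Project full mc_products rows down to the public fields, as the route does.
function toPublicProducts(rows) {
    return rows.map(({ id, name, description, price }) => ({ id, name, description, price }));
}

const rows = [
    // Internal columns such as egg and startup_command are dropped by the projection.
    { id: 'a3f91c', name: 'MC-1', description: '1 GB', price: 2.5, egg: 5, startup_command: 'java -jar server.jar' },
];
console.log(toPublicProducts(rows));
// [ { id: 'a3f91c', name: 'MC-1', description: '1 GB', price: 2.5 } ]
```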

api/routes/order-form.js Normal file

@@ -0,0 +1,75 @@
const router = require('express').Router();
const server = require('../server.js')
const fetch = require('cross-fetch')
const jsonParser = server.parser.json()
// server.js mounts this router at /api/order-form, so the path here stays empty.
router.post('', jsonParser, function (req, res) {
    var sql = 'SELECT token FROM users WHERE uuid = ?';
    server.con.query(sql, [req.query.uuid], function (err, result) {
        if (err) {server.logger(" [ERROR] Database error\n " + err)};
        if (result.length == 0) {
            return res.json({'error': true, 'code': 404})
        } else {
            if (result[0].token === req.query.token) {
                var sql = 'SELECT * FROM mc_products WHERE id = ?';
                server.con.query(sql, [req.body.product_id], function (err, result) {
                    if (err) {server.logger(" [ERROR] Database error\n " + err)};
var docker_img = "ghcr.io/pterodactyl/yolks:java_17"
let data = {
'name': result[0].name + " " + req.body.order[0].srv_name + " (" + req.body.order[1].first_name + ")",
"user": 1,
"egg": parseInt(result[0].egg),
'docker_image': docker_img,
'startup': result[0].startup_command,
"limits": {
"memory": parseInt(result[0].ram),
"swap": parseInt(result[0].swap),
"disk": parseInt(result[0].disk),
"io": parseInt(result[0].io),
"cpu": parseInt(result[0].cpu)
},
"feature_limits": {
'databases': parseInt(req.body.order[0].db_sup),
'allocations': 0,
'backups': parseInt(req.body.order[0].bkp_sup),
},
"environment": JSON.parse(result[0].env),
"allocation": {
"default": 1,
"additional": []
},
"deploy": {
"locations": [2],
"dedicated_ip": false,
"port_range": []
},
"start_on_completion": false,
"skip_scripts": false,
"oom_disabled": true
}
server.logger(JSON.stringify(data))
fetch("https://panel.mercurycloud.fr/api/application/servers", {
"method": "POST",
"headers": {
"Accept": "application/json",
"Content-Type": "application/json",
"Authorization": `Bearer ${server.pterodactyl_api_key}`,
},
"body": JSON.stringify(data)
}).then(response => console.log(response)).catch(err => console.error(err)).then(() => {
server.logger(" [DEBUG] New service !" + "\n Name : " + req.body.order[0].srv_name + "\n Owner first name : " + req.body.order[1].first_name + "\n Owner last name : " + req.body.order[1].last_name + "\n Owner mail : " + req.body.order[1].mail)
return res.json({"error": false, "response": "OK"});
})
});
} else {
return res.json({'error': true, 'code': 403})
}
}
})
})
module.exports = router;
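order-form converts the product row's stored string columns into the integer limits the Pterodactyl payload expects. A dependency-free sketch of that conversion (field names follow the route's own payload; the sample row is invented):

```js
// Build the "limits" block of the Pterodactyl payload from a product row,
// mirroring the parseInt calls in the route above.
function buildLimits(product) {
    return {
        memory: parseInt(product.ram, 10),
        swap: parseInt(product.swap, 10),
        disk: parseInt(product.disk, 10),
        io: parseInt(product.io, 10),
        cpu: parseInt(product.cpu, 10),
    };
}

const limits = buildLimits({ ram: '1024', swap: '0', disk: '5000', io: '500', cpu: '100' });
console.log(limits); // { memory: 1024, swap: 0, disk: 5000, io: 500, cpu: 100 }
```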

api/server.js Normal file

@@ -0,0 +1,84 @@
const https = require('https');
const express = require('express');
var getIP = require('ipware')().get_ip;
const fs = require('fs')
var crypto = require("crypto");
const uuid = require('uuid');
const fetch = require('cross-fetch');
const bodyParser = require('body-parser')
const bcrypt = require('bcrypt')
const app = express();
function logger(msg) {
    // Zero-padded "YYYY-MM-DD HH:MM:SS" timestamp, mirrored to stdout and latest.log.
    const d = new Date();
    const pad = n => String(n).padStart(2, '0');
    const stamp = '[' + d.getFullYear() + '-' + pad(d.getMonth() + 1) + '-' + pad(d.getDate()) + ' ' + pad(d.getHours()) + ':' + pad(d.getMinutes()) + ':' + pad(d.getSeconds()) + '] ';
    console.log(stamp + msg)
    fs.appendFileSync('latest.log', stamp + msg + '\n')
}
const PORT = 400
// Read the panel credential from the environment rather than committing it to the repository.
const pterodactyl_api_key = process.env.PTERODACTYL_API_KEY
var mysql = require('mysql');
var connection = mysql.createConnection({
host : '192.168.20.22',
user : 'mercurycloud_api',
password : process.env.DB_PASSWORD, // keep the database credential out of source control
database : 'mercurycloud_api'
});
logger(`
███╗ ███╗███████╗██████╗ ██████╗██╗ ██╗██████╗ ██╗ ██╗ ██████╗██╗ ██████╗ ██╗ ██╗██████╗ █████╗ ██████╗ ██╗
████╗ ████║██╔════╝██╔══██╗██╔════╝██║ ██║██╔══██╗╚██╗ ██╔╝ ██╔════╝██║ ██╔═══██╗██║ ██║██╔══██╗ ██╔══██╗██╔══██╗██║
██╔████╔██║█████╗ ██████╔╝██║ ██║ ██║██████╔╝ ╚████╔╝ ██║ ██║ ██║ ██║██║ ██║██║ ██║ ███████║██████╔╝██║
██║╚██╔╝██║██╔══╝ ██╔══██╗██║ ██║ ██║██╔══██╗ ╚██╔╝ ██║ ██║ ██║ ██║██║ ██║██║ ██║ ██╔══██║██╔═══╝ ██║
██║ ╚═╝ ██║███████╗██║ ██║╚██████╗╚██████╔╝██║ ██║ ██║ ╚██████╗███████╗╚██████╔╝╚██████╔╝██████╔╝ ██║ ██║██║ ██║
╚═╝ ╚═╝╚══════╝╚═╝ ╚═╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═╝ ╚═╝ ╚═════╝╚══════╝ ╚═════╝ ╚═════╝ ╚═════╝ ╚═╝ ╚═╝╚═╝ ╚═╝`);
connection.connect(function(err) {
if (err) {
logger(` [ERROR] Database error !\n ${err.stack}`);
return;
}
logger(` [INFO] Database successfully connected ! (${connection.threadId})`);
app.use(bodyParser.json());
app.use((req, res, next) => {
res.append('Access-Control-Allow-Origin', ['*']);
res.append('Access-Control-Allow-Methods', 'GET,PUT,POST,DELETE');
res.append('Access-Control-Allow-Headers', 'Content-Type');
next();
});
app.use('/api/mc-products', require('./routes/mc-products.js'));
app.use('/', require('./routes/index.js'));
app.use('/api/order-form', require('./routes/order-form.js'));
app.use('/api/create-user', require('./routes/create-user.js'));
app.use('/api/login-user', require('./routes/login-user.js'));
app.use('/api/create-product', require('./routes/create-product.js'));
app.use('/api/delete-product', require('./routes/delete-product.js'));
app.listen(PORT, () =>
logger(` [INFO] MercuryCloud API listening on https://api.mercurycloud.fr/ !`)
);
});
exports.crypto = crypto
exports.parser = bodyParser
exports.logger = logger
exports.con = connection
exports.ip = getIP
exports.pterodactyl_api_key = pterodactyl_api_key
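Every route authenticates through `uuid` and `token` query parameters. A client call can be sketched with the built-in `URL` class (the uuid below appears in latest.log; the token value is hypothetical):

```js
// Sketch of how a client addresses the authenticated routes: credentials
// travel as query parameters on the request URL.
const url = new URL('https://api.mercurycloud.fr/api/mc-products');
url.searchParams.set('uuid', '0a6d6d0f-07e8-436c-bf18-1c6cbf795586');
url.searchParams.set('token', 'hypothetical-token');
console.log(url.toString());
// https://api.mercurycloud.fr/api/mc-products?uuid=0a6d6d0f-07e8-436c-bf18-1c6cbf795586&token=hypothetical-token
```

Note that query-string credentials tend to end up in access logs; an `Authorization` header would be a safer transport for the token.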

1
node_modules/.bin/color-support generated vendored
View File

@@ -1 +0,0 @@
../color-support/bin.js

1
node_modules/.bin/mime generated vendored
View File

@@ -1 +0,0 @@
../mime/cli.js

1
node_modules/.bin/mkdirp generated vendored
View File

@@ -1 +0,0 @@
../mkdirp/bin/cmd.js

1
node_modules/.bin/node-pre-gyp generated vendored
View File

@@ -1 +0,0 @@
../@mapbox/node-pre-gyp/bin/node-pre-gyp

1
node_modules/.bin/nopt generated vendored
View File

@@ -1 +0,0 @@
../nopt/bin/nopt.js

1
node_modules/.bin/rimraf generated vendored
View File

@@ -1 +0,0 @@
../rimraf/bin.js

1
node_modules/.bin/semver generated vendored
View File

@@ -1 +0,0 @@
../semver/bin/semver.js

1
node_modules/.bin/sshpk-conv generated vendored
View File

@@ -1 +0,0 @@
../sshpk/bin/sshpk-conv

1
node_modules/.bin/sshpk-sign generated vendored
View File

@@ -1 +0,0 @@
../sshpk/bin/sshpk-sign

1
node_modules/.bin/sshpk-verify generated vendored
View File

@@ -1 +0,0 @@
../sshpk/bin/sshpk-verify

1
node_modules/.bin/uuid generated vendored
View File

@@ -1 +0,0 @@
../uuid/dist/bin/uuid

View File

@@ -1,21 +0,0 @@
MIT License
Copyright (c) 2021-2022 Devonte
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -1,68 +0,0 @@
<h1 align="center">PteroJS</h1>
<h3 align="center"><strong>A better API wrapper for Pterodactyl</strong></h3>
<p align="center"><img src="https://img.shields.io/badge/discord-invite-5865f2?style=for-the-badge&logo=discord&logoColor=white"> <img src="https://img.shields.io/badge/version-1.4.2-3572A5?style=for-the-badge"> <img src="https://img.shields.io/github/issues/devnote-dev/PteroJS.svg?style=for-the-badge"> <img src="https://img.shields.io/badge/docs-coming_soon-e67e22?style=for-the-badge"></p>
## About
PteroJS is a flexible API wrapper designed to give developers full access over the Pterodactyl API. The library uses a class-based management structure often seen in popular packages like Discord.js which keeps code clean, efficient and practical for any use-case.
## Installing
```
npm install @devnote-dev/pterojs
```
Please join the [support server](https://discord.gg/rmRw4W5XXz) if you experience package installation issues.
## Setting Up
PteroJS uses separate classes for the client and application sides of the Pterodactyl API.
### Using the application API
```js
const { PteroApp } = require('@devnote-dev/pterojs');
// Initialising the application
const client = new PteroApp('your.domain.here', 'pterodactyl_api_key');
// Accessing information
client.servers.fetch('evuk98yu').then(console.log);
```
### Using the client API
```js
const { PteroClient } = require('@devnote-dev/pterojs');
// Initialising the client
const client = new PteroClient(
'your.domain.here',
'pterodactyl_api_key',
{ ws: true }
);
// Adding the server to listen for
const shard = client.addSocksetServer('kgujg66h');
// Listening to events
shard.on('statusUpdate', status => {
console.log(`server ${shard.id} status: ${status}`);
});
// Connecting to the server
shard.connect();
```
## Contributing
Please see the [todo list](https://github.com/PteroPackages/PteroJS/blob/main/TODO.md) or [issues](https://github.com/PteroPackages/PteroJS/issues) section for contributing ideas. New ideas are also welcome.
1. [Fork this repo](https://github.com/PteroPackages/pterojs/fork)!
2. Make a branch from `main` (`git branch -b <new-feature>`)
3. Commit your changes (`git commit -am "..."`)
4. Open a PR here (`git push origin <new-feature>`)
## Contributors
* [Devonte](https://github.com/devnote-dev) - Owner, maintainer
* [Chelog](https://github.com/chelog) - Code contributor
* [Cain](https://github.com/cainthebest) - Code contributor
* [Zumo](https://github.com/ZumoDev) - Tester
* [Dino](https://github.com/DinoTheDevOfficial) - Tester
This repository is managed under the MIT license.
© 2021-2022 devnote-dev

View File

@@ -1,43 +0,0 @@
# PteroJS Todo
Want to contribute? Familiar with JS? You're already halfway there. Below are things that need to be done for various parts of the module. See [the contributing section](https://github.com/devnote-dev/PteroJS#contributing) to find out how to take part. 👍
## Application
- [X] Setup `connect()` function ([PteroApp](https://github.com/devnote-dev/PteroJS/blob/main/src/application/PteroApp.js#L33))
- [X] Implement 201 and 204 response handling ([RequestManager](https://github.com/devnote-dev/PteroJS/blob/main/src/application/managers/RequestManager.js))
- [X] Implement helper functions for all the managers
- [X] Create and implement `NestEggsManager`
## Client
- [X] Use typed `ClientOptions` for startup ([PteroClient](https://github.com/devnote-dev/PteroJS/blob/main/src/client/PteroClient.js#L13))
- [X] Setup `connect()` function ([PteroClient](https://github.com/devnote-dev/PteroJS/blob/main/src/client/PteroClient.js#L26))
- [X] Create and implement `WebsocketManager`
- [X] Rename endpoints in endpoints structure
- [X] Rewrite `ServerManager` with the correct server class ([ServerManager](https://github.com/devnote-dev/PteroJS/blob/main/src/client/managers/ServerManager.js))
- [X] Implement 201 and 204 response handling ([RequestManager](https://github.com/devnote-dev/PteroJS/blob/main/src/client/managers/RequestManager.js))
- [X] Implement helper functions for all the managers
- [X] Implement `ClientUser` required fetch on startup
- [ ] Document all functions
## Global Managers
- [X] Implement helper functions for all the managers
- [X] Create and implement all necessary submanagers ([Dashflo](https://dashflo.net/docs/api/pterodactyl/v1/#req_dc39cc65e67d47bd8fb37449a8559935))
- [X] Document all functions (resolved into others)
- [X] Switch `AllocationManager#cache` to maps
## Global Structures
- [X] Implement helper functions for all the structures
- [X] Figure out and implement a consistent management system for `Permissions` ([Permissions](https://github.com/devnote-dev/PteroJS/blob/main/src/structures/Permissions.js))
## Misc.
- [X] Add proper notes and annotations to JSDocs
- [X] Overall testing of the package (priority)
- [X] TypeScript support (`index.d.ts`)
- [X] Investigate incorrectly documented endpoints
- [X] Implement tests in `/tests` (or move from `/test`)
- [X] Remove deprecated `PteroUser#tfa`
- [X] Remove deprecated `Presets` util
## Feature Plans
- [ ] Optional webhook client
- [X] Node status client
- [ ] Data formatter interface client (may be updated to logging client)

View File

@@ -1,22 +0,0 @@
The MIT License (MIT)
Copyright (c) 2016 David Frank
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View File

@@ -1,590 +0,0 @@
node-fetch
==========
[![npm version][npm-image]][npm-url]
[![build status][travis-image]][travis-url]
[![coverage status][codecov-image]][codecov-url]
[![install size][install-size-image]][install-size-url]
[![Discord][discord-image]][discord-url]
A light-weight module that brings `window.fetch` to Node.js
(We are looking for [v2 maintainers and collaborators](https://github.com/bitinn/node-fetch/issues/567))
[![Backers][opencollective-image]][opencollective-url]
<!-- TOC -->
- [Motivation](#motivation)
- [Features](#features)
- [Difference from client-side fetch](#difference-from-client-side-fetch)
- [Installation](#installation)
- [Loading and configuring the module](#loading-and-configuring-the-module)
- [Common Usage](#common-usage)
- [Plain text or HTML](#plain-text-or-html)
- [JSON](#json)
- [Simple Post](#simple-post)
- [Post with JSON](#post-with-json)
- [Post with form parameters](#post-with-form-parameters)
- [Handling exceptions](#handling-exceptions)
- [Handling client and server errors](#handling-client-and-server-errors)
- [Advanced Usage](#advanced-usage)
- [Streams](#streams)
- [Buffer](#buffer)
- [Accessing Headers and other Meta data](#accessing-headers-and-other-meta-data)
- [Extract Set-Cookie Header](#extract-set-cookie-header)
- [Post data using a file stream](#post-data-using-a-file-stream)
- [Post with form-data (detect multipart)](#post-with-form-data-detect-multipart)
- [Request cancellation with AbortSignal](#request-cancellation-with-abortsignal)
- [API](#api)
- [fetch(url[, options])](#fetchurl-options)
- [Options](#options)
- [Class: Request](#class-request)
- [Class: Response](#class-response)
- [Class: Headers](#class-headers)
- [Interface: Body](#interface-body)
- [Class: FetchError](#class-fetcherror)
- [License](#license)
- [Acknowledgement](#acknowledgement)
<!-- /TOC -->
## Motivation
Instead of implementing `XMLHttpRequest` in Node.js to run browser-specific [Fetch polyfill](https://github.com/github/fetch), why not go from native `http` to `fetch` API directly? Hence, `node-fetch`, minimal code for a `window.fetch` compatible API on Node.js runtime.
See Matt Andrews' [isomorphic-fetch](https://github.com/matthew-andrews/isomorphic-fetch) or Leonardo Quixada's [cross-fetch](https://github.com/lquixada/cross-fetch) for isomorphic usage (exports `node-fetch` for server-side, `whatwg-fetch` for client-side).
## Features
- Stay consistent with `window.fetch` API.
- Make conscious trade-off when following [WHATWG fetch spec][whatwg-fetch] and [stream spec](https://streams.spec.whatwg.org/) implementation details, document known differences.
- Use native promise but allow substituting it with [insert your favorite promise library].
- Use native Node streams for body on both request and response.
- Decode content encoding (gzip/deflate) properly and convert string output (such as `res.text()` and `res.json()`) to UTF-8 automatically.
- Useful extensions such as timeout, redirect limit, response size limit, [explicit errors](ERROR-HANDLING.md) for troubleshooting.
## Difference from client-side fetch
- See [Known Differences](LIMITS.md) for details.
- If you happen to use a missing feature that `window.fetch` offers, feel free to open an issue.
- Pull requests are welcomed too!
## Installation
Current stable release (`2.x`)
```sh
$ npm install node-fetch
```
## Loading and configuring the module
We suggest you load the module via `require` until the stabilization of ES modules in node:
```js
const fetch = require('node-fetch');
```
If you are using a Promise library other than native, set it through `fetch.Promise`:
```js
const Bluebird = require('bluebird');
fetch.Promise = Bluebird;
```
## Common Usage
NOTE: The documentation below is up-to-date with `2.x` releases; see the [`1.x` readme](https://github.com/bitinn/node-fetch/blob/1.x/README.md), [changelog](https://github.com/bitinn/node-fetch/blob/1.x/CHANGELOG.md) and [2.x upgrade guide](UPGRADE-GUIDE.md) for the differences.
#### Plain text or HTML
```js
fetch('https://github.com/')
.then(res => res.text())
.then(body => console.log(body));
```
#### JSON
```js
fetch('https://api.github.com/users/github')
.then(res => res.json())
.then(json => console.log(json));
```
#### Simple Post
```js
fetch('https://httpbin.org/post', { method: 'POST', body: 'a=1' })
.then(res => res.json()) // expecting a json response
.then(json => console.log(json));
```
#### Post with JSON
```js
const body = { a: 1 };
fetch('https://httpbin.org/post', {
method: 'post',
body: JSON.stringify(body),
headers: { 'Content-Type': 'application/json' },
})
.then(res => res.json())
.then(json => console.log(json));
```
#### Post with form parameters
`URLSearchParams` is available in Node.js as of v7.5.0. See [official documentation](https://nodejs.org/api/url.html#url_class_urlsearchparams) for more usage methods.
NOTE: The `Content-Type` header is only set automatically to `x-www-form-urlencoded` when an instance of `URLSearchParams` is given as such:
```js
const { URLSearchParams } = require('url');
const params = new URLSearchParams();
params.append('a', 1);
fetch('https://httpbin.org/post', { method: 'POST', body: params })
.then(res => res.json())
.then(json => console.log(json));
```
#### Handling exceptions
NOTE: 3xx-5xx responses are *NOT* exceptions and should be handled in `then()`; see the next section for more information.
Adding a catch to the fetch promise chain will catch *all* exceptions, such as errors originating from node core libraries, network errors and operational errors, which are instances of FetchError. See the [error handling document](ERROR-HANDLING.md) for more details.
```js
fetch('https://domain.invalid/')
.catch(err => console.error(err));
```
#### Handling client and server errors
It is common to create a helper function to check that the response contains no client (4xx) or server (5xx) error responses:
```js
function checkStatus(res) {
if (res.ok) { // res.status >= 200 && res.status < 300
return res;
} else {
throw MyCustomError(res.statusText);
}
}
fetch('https://httpbin.org/status/400')
.then(checkStatus)
.then(res => console.log('will not get here...'))
```
## Advanced Usage
#### Streams
The "Node.js way" is to use streams when possible:
```js
fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png')
.then(res => {
const dest = fs.createWriteStream('./octocat.png');
res.body.pipe(dest);
});
```
#### Buffer
If you prefer to cache binary data in full, use buffer(). (NOTE: `buffer()` is a `node-fetch`-only API)
```js
const fileType = require('file-type');
fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png')
.then(res => res.buffer())
.then(buffer => fileType(buffer))
.then(type => { /* ... */ });
```
#### Accessing Headers and other Meta data
```js
fetch('https://github.com/')
.then(res => {
console.log(res.ok);
console.log(res.status);
console.log(res.statusText);
console.log(res.headers.raw());
console.log(res.headers.get('content-type'));
});
```
#### Extract Set-Cookie Header
Unlike browsers, you can access raw `Set-Cookie` headers manually using `Headers.raw()`. This is a `node-fetch` only API.
```js
fetch(url).then(res => {
    // returns an array of values, instead of a string of comma-separated values
    console.log(res.headers.raw()['set-cookie']);
});
```
#### Post data using a file stream
```js
const { createReadStream } = require('fs');
const stream = createReadStream('input.txt');
fetch('https://httpbin.org/post', { method: 'POST', body: stream })
    .then(res => res.json())
    .then(json => console.log(json));
```
#### Post with form-data (detect multipart)
```js
const FormData = require('form-data');
const form = new FormData();
form.append('a', 1);

fetch('https://httpbin.org/post', { method: 'POST', body: form })
    .then(res => res.json())
    .then(json => console.log(json));

// OR, using custom headers
// NOTE: getHeaders() is non-standard API
const form2 = new FormData();
form2.append('a', 1);

const options = {
    method: 'POST',
    body: form2,
    headers: form2.getHeaders()
};

fetch('https://httpbin.org/post', options)
    .then(res => res.json())
    .then(json => console.log(json));
```
#### Request cancellation with AbortSignal
> NOTE: You may cancel streamed requests only on Node >= v8.0.0
You may cancel requests with `AbortController`. A suggested implementation is [`abort-controller`](https://www.npmjs.com/package/abort-controller).
An example of timing out a request after 150ms could be achieved as follows:
```js
import AbortController from 'abort-controller';

const controller = new AbortController();
const timeout = setTimeout(
    () => { controller.abort(); },
    150,
);

fetch(url, { signal: controller.signal })
    .then(res => res.json())
    .then(
        data => {
            useData(data)
        },
        err => {
            if (err.name === 'AbortError') {
                // request was aborted
            }
        },
    )
    .finally(() => {
        clearTimeout(timeout);
    });
```
See [test cases](https://github.com/bitinn/node-fetch/blob/master/test/test.js) for more examples.
## API
### fetch(url[, options])
- `url` A string representing the URL for fetching
- `options` [Options](#fetch-options) for the HTTP(S) request
- Returns: <code>Promise&lt;[Response](#class-response)&gt;</code>
Perform an HTTP(S) fetch.
`url` should be an absolute url, such as `https://example.com/`. A path-relative URL (`/file/under/root`) or protocol-relative URL (`//can-be-http-or-https.com/`) will result in a rejected `Promise`.
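The absolute-URL requirement can be checked up front. A minimal sketch using Node's built-in WHATWG `URL` class, with a hypothetical `isFetchableUrl` helper (not part of node-fetch):

```js
// Hypothetical helper (not part of node-fetch): returns true only for
// absolute http(s) URLs, the only kind fetch() will accept.
function isFetchableUrl(input) {
    try {
        const url = new URL(input); // throws on relative and protocol-relative URLs
        return url.protocol === 'http:' || url.protocol === 'https:';
    } catch (err) {
        return false;
    }
}

console.log(isFetchableUrl('https://example.com/'));        // true
console.log(isFetchableUrl('/file/under/root'));            // false
console.log(isFetchableUrl('//can-be-http-or-https.com/')); // false
```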
<a id="fetch-options"></a>
### Options
The default values are shown after each option key.
```js
{
    // These properties are part of the Fetch Standard
    method: 'GET',
    headers: {},        // request headers. format is identical to that accepted by the Headers constructor (see below)
    body: null,         // request body. can be null, a string, a Buffer, a Blob, or a Node.js Readable stream
    redirect: 'follow', // set to `manual` to extract redirect headers, `error` to reject redirects
    signal: null,       // pass an instance of AbortSignal to optionally abort requests

    // The following properties are node-fetch extensions
    follow: 20,         // maximum redirect count. 0 to not follow redirects
    timeout: 0,         // req/res timeout in ms, it resets on redirect. 0 to disable (OS limit applies). Signal is recommended instead.
    compress: true,     // support gzip/deflate content encoding. false to disable
    size: 0,            // maximum response body size in bytes. 0 to disable
    agent: null         // http(s).Agent instance or function that returns an instance (see below)
}
```
##### Default Headers
If no values are set, the following request headers will be sent automatically:
Header | Value
------------------- | --------------------------------------------------------
`Accept-Encoding` | `gzip,deflate` _(when `options.compress === true`)_
`Accept` | `*/*`
`Connection` | `close` _(when no `options.agent` is present)_
`Content-Length` | _(automatically calculated, if possible)_
`Transfer-Encoding` | `chunked` _(when `req.body` is a stream)_
`User-Agent` | `node-fetch/1.0 (+https://github.com/bitinn/node-fetch)`
Note: when `body` is a `Stream`, `Content-Length` is not set automatically.
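Any of these defaults can be replaced by setting the header explicitly in `options.headers`; a value you supply always wins over the automatic one. A small sketch (the header values here are illustrative, not required by node-fetch):

```js
// Explicit headers override the automatic defaults listed above
// (values are illustrative).
const options = {
    headers: {
        'User-Agent': 'my-app/1.0',  // replaces the default node-fetch user agent
        'Accept': 'application/json' // replaces the default */*
    }
};

console.log(options.headers['User-Agent']); // 'my-app/1.0'
// fetch('https://example.com/', options) would then send these values
```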
##### Custom Agent
The `agent` option allows you to specify networking related options which are out of the scope of Fetch, including but not limited to the following:
- Support self-signed certificate
- Use only IPv4 or IPv6
- Custom DNS Lookup
See [`http.Agent`](https://nodejs.org/api/http.html#http_new_agent_options) for more information.
In addition, the `agent` option accepts a function that returns an `http(s).Agent` instance given the current [URL](https://nodejs.org/api/url.html); this is useful during a redirection chain across HTTP and HTTPS protocols.
```js
const httpAgent = new http.Agent({
    keepAlive: true
});
const httpsAgent = new https.Agent({
    keepAlive: true
});

const options = {
    agent: function (_parsedURL) {
        if (_parsedURL.protocol === 'http:') {
            return httpAgent;
        } else {
            return httpsAgent;
        }
    }
};
```
<a id="class-request"></a>
### Class: Request
An HTTP(S) request containing information about URL, method, headers, and the body. This class implements the [Body](#iface-body) interface.
Due to the nature of Node.js, the following properties are not implemented at this moment:
- `type`
- `destination`
- `referrer`
- `referrerPolicy`
- `mode`
- `credentials`
- `cache`
- `integrity`
- `keepalive`
The following node-fetch extension properties are provided:
- `follow`
- `compress`
- `counter`
- `agent`
See [options](#fetch-options) for exact meaning of these extensions.
#### new Request(input[, options])
<small>*(spec-compliant)*</small>
- `input` A string representing a URL, or another `Request` (which will be cloned)
- `options` [Options](#fetch-options) for the HTTP(S) request
Constructs a new `Request` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Request/Request).
In most cases, calling `fetch(url, options)` directly is simpler than creating a `Request` object.
<a id="class-response"></a>
### Class: Response
An HTTP(S) response. This class implements the [Body](#iface-body) interface.
The following properties are not implemented in node-fetch at this moment:
- `Response.error()`
- `Response.redirect()`
- `type`
- `trailer`
#### new Response([body[, options]])
<small>*(spec-compliant)*</small>
- `body` A `String` or [`Readable` stream][node-readable]
- `options` A [`ResponseInit`][response-init] options dictionary
Constructs a new `Response` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Response/Response).
Because Node.js does not implement service workers (for which this class was designed), one rarely has to construct a `Response` directly.
#### response.ok
<small>*(spec-compliant)*</small>
Convenience property indicating whether the request ended normally. Will evaluate to true if the response status was greater than or equal to 200 but smaller than 300.
#### response.redirected
<small>*(spec-compliant)*</small>
Convenience property indicating whether the request has been redirected at least once. Will evaluate to true if the internal redirect counter is greater than 0.
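Both flags are simple derivations from response state. As a rough illustration (these helpers are not node-fetch APIs, just restatements of the rules above):

```js
// ok: final status is in the 2xx range
function isOk(status) {
    return status >= 200 && status < 300;
}

// redirected: the internal redirect counter has moved past zero
function isRedirected(redirectCounter) {
    return redirectCounter > 0;
}

console.log(isOk(204));       // true
console.log(isOk(302));       // false
console.log(isRedirected(0)); // false
console.log(isRedirected(2)); // true
```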
<a id="class-headers"></a>
### Class: Headers
This class allows manipulating and iterating over a set of HTTP headers. All methods specified in the [Fetch Standard][whatwg-fetch] are implemented.
#### new Headers([init])
<small>*(spec-compliant)*</small>
- `init` Optional argument to pre-fill the `Headers` object
Constructs a new `Headers` object. `init` can be either `null`, a `Headers` object, a key-value map object, or any iterable object.
```js
// Example adapted from https://fetch.spec.whatwg.org/#example-headers-class
const meta = {
    'Content-Type': 'text/xml',
    'Breaking-Bad': '<3'
};
const headers = new Headers(meta);

// The above is equivalent to
const meta2 = [
    [ 'Content-Type', 'text/xml' ],
    [ 'Breaking-Bad', '<3' ]
];
const headers2 = new Headers(meta2);

// You can in fact use any iterable object, like a Map or even another Headers
const meta3 = new Map();
meta3.set('Content-Type', 'text/xml');
meta3.set('Breaking-Bad', '<3');
const headers3 = new Headers(meta3);
const copyOfHeaders = new Headers(headers3);
```
<a id="iface-body"></a>
### Interface: Body
`Body` is an abstract interface with methods that are applicable to both `Request` and `Response` classes.
The following methods are not yet implemented in node-fetch at this moment:
- `formData()`
#### body.body
<small>*(deviation from spec)*</small>
* Node.js [`Readable` stream][node-readable]
Data are encapsulated in the `Body` object. Note that while the [Fetch Standard][whatwg-fetch] requires the property to always be a WHATWG `ReadableStream`, in node-fetch it is a Node.js [`Readable` stream][node-readable].
#### body.bodyUsed
<small>*(spec-compliant)*</small>
* `Boolean`
A boolean property indicating whether this body has been consumed. Per the spec, a consumed body cannot be used again.
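The one-shot behaviour can be pictured with a small stand-in object (a sketch of the semantics, not node-fetch's internals):

```js
// Stand-in for a Body: the first consumer wins, later reads reject.
function makeBody(text) {
    let used = false;
    return {
        get bodyUsed() { return used; },
        text() {
            if (used) return Promise.reject(new TypeError('body used already'));
            used = true;
            return Promise.resolve(text);
        }
    };
}

const body = makeBody('hello');
body.text().then(t => console.log(t));           // 'hello'
console.log(body.bodyUsed);                      // true
body.text().catch(err => console.log(err.name)); // 'TypeError'
```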
#### body.arrayBuffer()
#### body.blob()
#### body.json()
#### body.text()
<small>*(spec-compliant)*</small>
* Returns: <code>Promise</code>
Consume the body and return a promise that will resolve to one of these formats.
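Because each consumer expects a particular payload, it is common to pick one based on the `Content-Type` header. A sketch with a hypothetical `consume` helper and a stubbed response object (neither is part of node-fetch):

```js
// Hypothetical helper: choose json() or text() from the Content-Type.
function consume(res) {
    const type = res.headers.get('content-type') || '';
    return type.includes('application/json') ? res.json() : res.text();
}

// Minimal stub standing in for a real Response:
const stub = {
    headers: { get: () => 'application/json; charset=utf-8' },
    json: () => Promise.resolve({ greeting: 'hi' }),
    text: () => Promise.resolve('plain')
};

consume(stub).then(data => console.log(data.greeting)); // 'hi'
```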
#### body.buffer()
<small>*(node-fetch extension)*</small>
* Returns: <code>Promise&lt;Buffer&gt;</code>
Consume the body and return a promise that will resolve to a Buffer.
#### body.textConverted()
<small>*(node-fetch extension)*</small>
* Returns: <code>Promise&lt;String&gt;</code>
Identical to `body.text()`, except instead of always converting to UTF-8, encoding sniffing will be performed and text converted to UTF-8 if possible.
(This API requires an optional dependency of the npm package [encoding](https://www.npmjs.com/package/encoding), which you need to install manually. `webpack` users may see [a warning message](https://github.com/bitinn/node-fetch/issues/412#issuecomment-379007792) due to this optional dependency.)
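The core of the sniffing step is honouring a declared charset before falling back to UTF-8. A simplified illustration of that first check (the real implementation consults further signals, e.g. HTML `<meta>` tags, via the `encoding` package):

```js
// Simplified charset sniff: prefer an explicit charset parameter in the
// Content-Type header, otherwise fall back to UTF-8.
function sniffCharset(contentType) {
    const match = /charset=([^;]+)/i.exec(contentType || '');
    return match ? match[1].trim().toLowerCase() : 'utf-8';
}

console.log(sniffCharset('text/html; charset=ISO-8859-1')); // 'iso-8859-1'
console.log(sniffCharset('text/plain'));                    // 'utf-8'
```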
<a id="class-fetcherror"></a>
### Class: FetchError
<small>*(node-fetch extension)*</small>
An operational error in the fetching process. See [ERROR-HANDLING.md][] for more info.
<a id="class-aborterror"></a>
### Class: AbortError
<small>*(node-fetch extension)*</small>
An Error thrown when the request is aborted in response to an `AbortSignal`'s `abort` event. It has a `name` property of `AbortError`. See [ERROR-HANDLING.md][] for more info.
## Acknowledgement
Thanks to [github/fetch](https://github.com/github/fetch) for providing a solid implementation reference.
`node-fetch` v1 was maintained by [@bitinn](https://github.com/bitinn); v2 was maintained by [@TimothyGu](https://github.com/timothygu), [@bitinn](https://github.com/bitinn) and [@jimmywarting](https://github.com/jimmywarting); v2 readme is written by [@jkantr](https://github.com/jkantr).
## License
MIT
[npm-image]: https://flat.badgen.net/npm/v/node-fetch
[npm-url]: https://www.npmjs.com/package/node-fetch
[travis-image]: https://flat.badgen.net/travis/bitinn/node-fetch
[travis-url]: https://travis-ci.org/bitinn/node-fetch
[codecov-image]: https://flat.badgen.net/codecov/c/github/bitinn/node-fetch/master
[codecov-url]: https://codecov.io/gh/bitinn/node-fetch
[install-size-image]: https://flat.badgen.net/packagephobia/install/node-fetch
[install-size-url]: https://packagephobia.now.sh/result?p=node-fetch
[discord-image]: https://img.shields.io/discord/619915844268326952?color=%237289DA&label=Discord&style=flat-square
[discord-url]: https://discord.gg/Zxbndcm
[opencollective-image]: https://opencollective.com/node-fetch/backers.svg
[opencollective-url]: https://opencollective.com/node-fetch
[whatwg-fetch]: https://fetch.spec.whatwg.org/
[response-init]: https://fetch.spec.whatwg.org/#responseinit
[node-readable]: https://nodejs.org/api/stream.html#stream_readable_streams
[mdn-headers]: https://developer.mozilla.org/en-US/docs/Web/API/Headers
[LIMITS.md]: https://github.com/bitinn/node-fetch/blob/master/LIMITS.md
[ERROR-HANDLING.md]: https://github.com/bitinn/node-fetch/blob/master/ERROR-HANDLING.md
[UPGRADE-GUIDE.md]: https://github.com/bitinn/node-fetch/blob/master/UPGRADE-GUIDE.md


@@ -1,25 +0,0 @@
"use strict";
// ref: https://github.com/tc39/proposal-global
var getGlobal = function () {
// the only reliable means to get the global object is
// `Function('return this')()`
// However, this causes CSP violations in Chrome apps.
if (typeof self !== 'undefined') { return self; }
if (typeof window !== 'undefined') { return window; }
if (typeof global !== 'undefined') { return global; }
throw new Error('unable to locate global object');
}
var global = getGlobal();
module.exports = exports = global.fetch;
// Needed for TypeScript and Webpack.
if (global.fetch) {
exports.default = global.fetch.bind(global);
}
exports.Headers = global.Headers;
exports.Request = global.Request;
exports.Response = global.Response;

File diff suppressed because it is too large
File diff suppressed because it is too large
File diff suppressed because it is too large


@@ -1,76 +0,0 @@
{
"name": "node-fetch",
"version": "2.6.7",
"description": "A light-weight module that brings window.fetch to node.js",
"main": "lib/index.js",
"browser": "./browser.js",
"module": "lib/index.mjs",
"files": [
"lib/index.js",
"lib/index.mjs",
"lib/index.es.js",
"browser.js"
],
"engines": {
"node": "4.x || >=6.0.0"
},
"scripts": {
"build": "cross-env BABEL_ENV=rollup rollup -c",
"prepare": "npm run build",
"test": "cross-env BABEL_ENV=test mocha --require babel-register --throw-deprecation test/test.js",
"report": "cross-env BABEL_ENV=coverage nyc --reporter lcov --reporter text mocha -R spec test/test.js",
"coverage": "cross-env BABEL_ENV=coverage nyc --reporter json --reporter text mocha -R spec test/test.js && codecov -f coverage/coverage-final.json"
},
"repository": {
"type": "git",
"url": "https://github.com/bitinn/node-fetch.git"
},
"keywords": [
"fetch",
"http",
"promise"
],
"author": "David Frank",
"license": "MIT",
"bugs": {
"url": "https://github.com/bitinn/node-fetch/issues"
},
"homepage": "https://github.com/bitinn/node-fetch",
"dependencies": {
"whatwg-url": "^5.0.0"
},
"peerDependencies": {
"encoding": "^0.1.0"
},
"peerDependenciesMeta": {
"encoding": {
"optional": true
}
},
"devDependencies": {
"@ungap/url-search-params": "^0.1.2",
"abort-controller": "^1.1.0",
"abortcontroller-polyfill": "^1.3.0",
"babel-core": "^6.26.3",
"babel-plugin-istanbul": "^4.1.6",
"babel-preset-env": "^1.6.1",
"babel-register": "^6.16.3",
"chai": "^3.5.0",
"chai-as-promised": "^7.1.1",
"chai-iterator": "^1.1.1",
"chai-string": "~1.3.0",
"codecov": "3.3.0",
"cross-env": "^5.2.0",
"form-data": "^2.3.3",
"is-builtin-module": "^1.0.0",
"mocha": "^5.0.0",
"nyc": "11.9.0",
"parted": "^0.1.1",
"promise": "^8.0.3",
"resumer": "0.0.0",
"rollup": "^0.63.4",
"rollup-plugin-babel": "^3.0.7",
"string-to-arraybuffer": "^1.0.2",
"teeny-request": "3.7.0"
}
}


@@ -1,29 +0,0 @@
{
"name": "@devnote-dev/pterojs",
"version": "1.4.2",
"description": "A flexibile API wrapper for Pterodactyl",
"main": "src/index.js",
"scripts": {
"tests": "node tests/index"
},
"repository": {
"type": "git",
"url": "git+https://github.com/PteroPackages/PteroJS.git"
},
"keywords": [
"pterodactyl",
"pterodactyl-api",
"wrapper",
"javascript"
],
"author": "Devonte <https://github.com/devnote-dev>",
"license": "MIT",
"bugs": {
"url": "https://github.com/PteroPackages/PteroJS/issues"
},
"homepage": "https://github.com/PteroPackages/PteroJS#readme",
"dependencies": {
"node-fetch": "^2.6.7",
"ws": "^8.5.0"
}
}


@@ -1,44 +0,0 @@
{
"application":{
"users":{
"fetch": true,
"cache": true,
"max": -1
},
"nodes":{
"fetch": true,
"cache": true,
"max": -1
},
"nests":{
"fetch": false,
"cache": false,
"max": 0
},
"servers":{
"fetch": false,
"cache": true,
"max": 10
},
"locations":{
"fetch": false,
"cache": false,
"max": 0
}
},
"client":{
"ws": true,
"fetchClient": true,
"servers":{
"fetch": true,
"cache": true,
"max": -1
},
"subUsers":{
"fetch": false,
"cache": false,
"max": 0
},
"disableEvents":["debug"]
}
}


@@ -1,229 +0,0 @@
const ApplicationServer = require('../structures/ApplicationServer');
const Dict = require('../structures/Dict');
const { PteroUser } = require('../structures/User');
const build = require('../util/query');
const endpoints = require('./endpoints');
class ApplicationServerManager {
/**
* Allowed filter arguments for application servers.
*/
static get FILTERS() {
return Object.freeze([
'name', 'uuid', 'uuidShort',
'externalId', 'image'
]);
}
/**
* Allowed include arguments for application servers.
*/
static get INCLUDES() {
return Object.freeze([
'allocations', 'user', 'subusers',
'nest', 'egg', 'variables',
'location', 'node', 'databases'
]);
}
/**
* Allowed sort arguments for application servers.
*/
static get SORTS() {
return Object.freeze(['id', '-id', 'uuid', '-uuid']);
}
constructor(client) {
this.client = client;
this.cache = new Dict();
}
get defaultLimits() {
return {
memory: 128,
swap: 0,
disk: 512,
io: 500,
cpu: 100
}
}
get defaultFeatureLimits() {
return {
databases: 5,
backups: 1
}
}
_patch(data) {
if (data?.data) {
const res = new Dict();
for (let o of data.data) {
o = o.attributes;
const s = new ApplicationServer(this.client, o);
res.set(s.id, s);
}
if (this.client.options.servers.cache) res.forEach((v, k) => this.cache.set(k, v));
return res;
}
const s = new ApplicationServer(this.client, data.attributes);
if (this.client.options.servers.cache) this.cache.set(s.id, s);
return s;
}
/**
* Resolves a server from an object. This can be:
* * a string
* * a number
* * an object
*
* Returns `undefined` if not found.
* @param {string|number|object|ApplicationServer} obj The object to resolve from.
* @returns {?ApplicationServer} The resolved server.
*/
resolve(obj) {
if (obj instanceof ApplicationServer) return obj;
if (typeof obj === 'number') return this.cache.get(obj);
if (typeof obj === 'string') return this.cache.find(s => s.name === obj);
if (obj.relationships?.servers) return this._patch(obj.relationships.servers);
return undefined;
}
/**
* Returns a formatted URL to the server.
* @param {string|ApplicationServer} server The server or server identifier.
* @returns {string} The formatted URL.
*/
panelURLFor(server) {
if (server instanceof ApplicationServer) return server.panelURL;
return `${this.client.domain}/server/${server}`;
}
/**
* Returns a formatted URL to the server in the admin panel.
* @param {number|ApplicationServer} server The server or server ID.
* @returns {string} The formatted URL.
*/
adminURLFor(server) {
if (server instanceof ApplicationServer) return server.adminURL;
return `${this.client.domain}/admin/servers/view/${server}`;
}
/**
* Fetches a server from the Pterodactyl API with an optional cache check.
* @param {number} [id] The ID of the server.
* @param {object} [options] Additional fetch options.
* @param {boolean} [options.force] Whether to skip checking the cache and fetch directly.
* @param {string[]} [options.include] Additional fetch parameters to include.
* @returns {Promise<ApplicationServer|Dict<number, ApplicationServer>>} The fetched server(s).
*/
async fetch(id, options = {}) {
if (id && !options.force) {
const s = this.cache.get(id);
if (s) return Promise.resolve(s);
}
const query = build(options, { include: ApplicationServerManager.INCLUDES });
const data = await this.client.requests.get(
(id ? endpoints.servers.get(id) : endpoints.servers.main) + query
);
return this._patch(data);
}
/**
* Queries the API for a server (or servers) that match the specified query filter.
* Keep in mind this does NOT check the cache first, it will fetch from the API directly.
* Available query filters are:
* * name
* * uuid
* * uuidShort
* * identifier (alias for uuidShort)
* * externalId
* * image
*
* Available sort options are:
* * id
* * -id
* * uuid
* * -uuid
*
* @param {string} entity The entity (string) to query.
* @param {string} filter The filter to use for the query.
* @param {string} sort The order to sort the results in.
* @returns {Promise<Dict<number, ApplicationServer>>} A dict of the queried servers.
*/
async query(entity, filter, sort) {
if (!sort && !filter) throw new Error('Sort or filter is required.');
if (filter === 'identifier') filter = 'uuidShort';
if (filter === 'externalId') filter = 'external_id';
const { FILTERS, SORTS } = ApplicationServerManager;
const query = build(
{ filter:[filter, entity], sort },
{ filters: FILTERS, sorts: SORTS }
);
const data = await this.client.requests.get(
endpoints.servers.main + query
);
return this._patch(data);
}
/**
* Creates a new Pterodactyl server for a specified user.
* @param {number|PteroUser} user The user to create the server for.
* @param {object} options Base server options.
* @param {string} options.name The name of the server.
* @param {number} options.egg The egg for the server.
* @param {string} options.image The docker image for the server.
* @param {string} options.startup The startup command for the server.
* @param {object} options.env Server environment options.
* @param {number} options.allocation The allocation for the server.
* @param {object} [options.limits] Resource limits for the server.
* @param {object} [options.featureLimits] Feature limits for the server.
* @returns {Promise<ApplicationServer>} The new server.
*/
async create(user, options = {}) {
if (
!options.name ||
!options.egg ||
!options.image ||
!options.startup ||
!options.env
) throw new Error('Missing required server option.');
if (user instanceof PteroUser) user = user.id;
const payload = {};
payload.user = user;
payload.name = options.name;
payload.egg = options.egg;
payload.startup = options.startup;
payload.docker_image = options.image;
payload.environment = options.env;
payload.allocation = { default: options.allocation };
payload.limits = options.limits ?? this.defaultLimits;
payload.feature_limits = options.featureLimits ?? this.defaultFeatureLimits;
await this.client.requests.post(endpoints.servers.main, payload);
const data = await this.query(payload.name, 'name', '-id');
return data.find(s => s.name === payload.name);
}
/**
* Deletes a specified server.
* @param {number|ApplicationServer} server The ID of the server.
* @param {boolean} [force] Whether to force delete the server.
* @returns {Promise<boolean>}
*/
async delete(server, force = false) {
if (server instanceof ApplicationServer) server = server.id;
await this.client.requests.delete(
endpoints.servers.get(server) + (force ? '/force' : '')
);
this.cache.delete(server);
return true;
}
}
module.exports = ApplicationServerManager;


@@ -1,72 +0,0 @@
const Dict = require('../structures/Dict');
const build = require('../util/query');
const endpoints = require('./endpoints');
class NestEggsManager {
/**
* Allowed include arguments for nest eggs.
*/
static get INCLUDES() {
return Object.freeze([
'nest', 'servers', 'config',
'script', 'variables'
]);
}
constructor(client) {
this.client = client;
/** @type {Dict<number, object>} */
this.cache = new Dict();
}
/**
* Returns a formatted URL to the egg in the admin panel.
* @param {number} id The ID of the egg.
* @returns {string} The formatted URL.
*/
adminURLFor(id) {
return `${this.client.domain}/admin/nests/egg/${id}`;
}
/**
* Fetches the eggs for the specified nest.
* @param {number} nest The ID of the nest to fetch from.
* @param {number} [id] The ID of the egg.
* @param {object} [options] Additional fetch options.
* @param {boolean} [options.force] Whether to skip checking the cache and fetch directly.
* @param {string[]} [options.include] Additional fetch parameters to include.
* @returns {Promise<object|Dict<number, object>>} The fetched egg(s).
*/
async fetch(nest, id, options = {}) {
if (id && !options.force) {
const e = this.cache.get(id);
if (e) return Promise.resolve(e);
}
const query = build(options, { include: NestEggsManager.INCLUDES });
const data = await this.client.requests.get(
(id ? endpoints.nests.eggs.get(nest, id) : endpoints.nests.eggs.main(nest)) + query
);
const res = new Dict();
for (const egg of data.data) {
this.cache.set(egg.attributes.id, egg.attributes);
res.set(egg.attributes.id, egg.attributes);
}
return res;
}
/**
* Searches the cache for eggs that are for the specified nest.
* @param {number} nest The ID of the nest to search.
* @returns {object[]} The nest's eggs.
*/
for(nest) {
const res = [];
for (const [, egg] of this.cache) if (egg.nest === nest) res.push(egg);
return res;
}
}
module.exports = NestEggsManager;


@@ -1,94 +0,0 @@
const NestEggsManager = require('./NestEggsManager');
const build = require('../util/query');
const endpoints = require('./endpoints');
class NestManager {
/**
* Allowed include arguments for nests.
*/
static get INCLUDES() {
return Object.freeze(['eggs', 'servers']);
}
constructor(client) {
this.client = client;
/** @type {Set<Nest>} */
this.cache = new Set();
/** @type {NestEggsManager} */
this.eggs = new NestEggsManager(this.client);
}
_patch(data) {
const res = new Set();
if (data.data) {
for (let o of data.data) {
o = o.attributes;
res.add({
id: o.id,
uuid: o.uuid,
author: o.author,
name: o.name,
description: o.description,
createdAt: new Date(o.created_at),
updatedAt: o.updated_at ? new Date(o.updated_at) : null
});
}
if (this.client.options.nests.cache) res.forEach(n => this.cache.add(n));
return res;
}
data = data.attributes;
res.add({
id: data.id,
uuid: data.uuid,
author: data.author,
name: data.name,
description: data.description,
createdAt: new Date(data.created_at),
updatedAt: data.updated_at ? new Date(data.updated_at) : null
});
if (this.client.options.nests.cache) res.forEach(n => this.cache.add(n));
return res;
}
/**
* Returns a formatted URL to the nest in the admin panel.
* @param {number} id The ID of the nest.
* @returns {string} The formatted URL.
*/
adminURLFor(id) {
return `${this.client.domain}/admin/nests/view/${id}`;
}
/**
* Fetches a nest from the Pterodactyl API.
* @param {number} [id] The ID of the nest.
* @param {string[]} [include] Additional data to include about the nest.
* @returns {Promise<Set<Nest>>} The fetched nests.
*/
async fetch(id, include = []) {
const query = build({ include }, { include: NestManager.INCLUDES });
const data = await this.client.requests.get(
(id ? endpoints.nests.get(id) : endpoints.nests.main) + query
);
return this._patch(data);
}
}
module.exports = NestManager;
/**
* Represents a nest on Pterodactyl.
* @typedef {object} Nest
* @property {number} id The ID of the nest.
* @property {string} uuid The UUID of the nest.
* @property {string} author The author of the nest.
* @property {string} name The name of the nest.
* @property {string} description The description of the nest.
* @property {Date} createdAt The date the nest was created.
* @property {?Date} updatedAt The date the nest was last updated.
*/


@@ -1,192 +0,0 @@
const Dict = require('../structures/Dict');
const build = require('../util/query');
const endpoints = require('./endpoints');
class NodeLocationManager {
/**
* Allowed filter arguments for locations.
*/
static get FILTERS() {
return Object.freeze(['short', 'long']);
}
/**
* Allowed include arguments for locations.
*/
static get INCLUDES() {
return Object.freeze(['nodes', 'servers']);
}
constructor(client) {
this.client = client;
/** @type {Dict<number, NodeLocation>} */
this.cache = new Dict();
}
_patch(data) {
if (data.data) {
const res = new Map();
for (let o of data.data) {
o = o.attributes;
res.set(o.id, {
id: o.id,
long: o.long,
short: o.short,
createdAt: new Date(o.created_at),
updatedAt: o.updated_at ? new Date(o.updated_at) : null
});
}
if (this.client.options.locations.cache) res.forEach((v, k) => this.cache.set(k, v));
return res;
}
data = data.attributes;
const loc = {
id: data.id,
long: data.long,
short: data.short,
createdAt: new Date(data.created_at),
updatedAt: data.updated_at ? new Date(data.updated_at) : null
}
if (this.client.options.locations.cache) this.cache.set(data.id, loc);
return loc;
}
/**
* Resolves a node location from an object. This can be:
* * a number
* * a string
* * an object
*
* Returns `undefined` if not found.
* @param {string|number|object} obj The object to resolve from.
* @returns {?NodeLocation} The resolved node location.
*/
resolve(obj) {
if (typeof obj === 'number') return this.cache.get(obj);
if (typeof obj === 'string') return this.cache.find(
o => (o.short === obj) || (o.long === obj)
);
if (obj.relationships?.location?.attributes)
return this._patch(obj.relationships.location);
return undefined;
}
/**
* Returns a formatted URL to the node location in the admin panel.
* @param {number} id The ID of the node location.
* @returns {string} The formatted URL.
*/
adminURLFor(id) {
return `${this.client.domain}/admin/locations/view/${id}`;
}
/**
* Fetches a node location from the Pterodactyl API with an optional cache check.
* @param {number} [id] The ID of the location.
* @param {object} [options] Additional fetch options.
* @param {boolean} [options.force] Whether to skip checking the cache and fetch directly.
* @param {string[]} [options.include] Additional data to include about the location.
* @returns {Promise<NodeLocation|Dict<number, NodeLocation>>} The fetched node location(s).
*/
async fetch(id, options = {}) {
if (id && !options.force) {
const loc = this.cache.get(id);
if (loc) return Promise.resolve(loc);
}
const query = build(options, { include: NodeLocationManager.INCLUDES });
const data = await this.client.requests.get(
(id ? endpoints.locations.get(id) : endpoints.locations.main) + query
);
return this._patch(data);
}
/**
* Queries the API for a location (or locations) that match the specified query filter/sort.
* This does NOT check the cache first, it is a direct fetch from the API.
* Available filters:
* * short
* * long
*
* Available sort options:
* * id
* * -id
*
* @param {string} entity The entity to query.
* @param {string} [filter] The filter to use for the query.
* @param {string} [sort] The order to sort the results in.
* @returns {Promise<Dict<number, NodeLocation>>} A dict of the queried locations.
*/
async query(entity, filter, sort) {
if (!sort && !filter) throw new Error('Sort or filter is required.');
const query = build(
{ filter:[filter, entity], sort },
{ filters: NodeLocationManager.FILTERS, sorts:['id'] }
);
const data = await this.client.requests.get(
endpoints.locations.main + query
);
return this._patch(data);
}
/**
* Creates a new node location.
* @param {string} short The short location code of the location.
* @param {string} long The long location code of the location.
* @returns {Promise<NodeLocation>} The new node location.
*/
async create(short, long) {
return this._patch(
await this.client.requests.post(
endpoints.locations.main,
{ short, long }
)
);
}
/**
* Updates an existing node location.
* @param {number} id The ID of the node location.
* @param {object} options Location update options.
* @param {string} [options.short] The short location code of the location.
* @param {string} [options.long] The long location code of the location.
* @returns {Promise<NodeLocation>} The updated node location instance.
*/
async update(id, options) {
if (!options.short && !options.long)
throw new Error('Either short or long option is required.');
return this._patch(
await this.client.requests.patch(endpoints.locations.get(id), options)
);
}
/**
* Deletes a node location.
* @param {number} id The ID of the node location.
* @returns {Promise<boolean>}
*/
async delete(id) {
await this.client.requests.delete(endpoints.locations.get(id));
this.cache.delete(id);
return true;
}
}
module.exports = NodeLocationManager;
/**
* Represents a location on Pterodactyl.
* @typedef {object} NodeLocation
* @property {number} id The ID of the location.
* @property {string} long The long location code.
* @property {string} short The short location code (or country code).
* @property {Date} createdAt The date the location was created.
* @property {?Date} updatedAt The date the location was last updated.
*/


@@ -1,233 +0,0 @@
const Node = require('../structures/Node');
const Dict = require('../structures/Dict');
const build = require('../util/query');
const endpoints = require('./endpoints');
class NodeManager {
/**
* Allowed filter arguments for nodes.
*/
static get FILTERS() {
return Object.freeze(['uuid', 'name', 'fqdn', 'daemon_token_id']);
}
/**
* Allowed include arguments for nodes.
*/
static get INCLUDES() {
return Object.freeze(['allocations', 'location', 'servers']);
}
/**
* Allowed sort arguments for nodes.
*/
static get SORTS() {
return Object.freeze(['id', 'uuid', 'memory', 'disk']);
}
constructor(client) {
this.client = client;
/** @type {Dict<number, Node>} */
this.cache = new Dict();
}
_patch(data) {
if (data.data) {
const res = new Dict();
for (const o of data.data) {
const n = new Node(this.client, o);
res.set(n.id, n);
}
if (this.client.options.nodes.cache) res.forEach((v, k) => this.cache.set(k, v));
return res;
}
const n = new Node(this.client, data);
if (this.client.options.nodes.cache) this.cache.set(n.id, n);
return n;
}
/**
* Resolves a node from an object. This can be:
* * a string
* * a number
* * an object
*
* Returns `undefined` if not found.
* @param {string|number|object|Node} obj The object to resolve from.
* @returns {?Node} The resolved node.
*/
resolve(obj) {
if (obj instanceof Node) return obj;
if (typeof obj === 'number') return this.cache.get(obj);
if (typeof obj === 'string') return this.cache.find(n => n.name === obj);
if (obj.relationships?.node) return this._patch(obj.relationships.node);
return undefined;
}
/**
* Returns a formatted URL to the node in the admin panel.
* @param {number|Node} node The node or ID of the node.
* @returns {string} The formatted URL.
*/
adminURLFor(node) {
if (node instanceof Node) return node.adminURL;
return `${this.client.domain}/admin/nodes/view/${node}`;
}
/**
* Fetches a node from the Pterodactyl API with an optional cache check.
* @param {number} [id] The ID of the node.
* @param {object} [options] Additional fetch options.
* @param {boolean} [options.force] Whether to skip checking the cache and fetch directly.
* @param {string[]} [options.include] Additional data to include about the node.
* @returns {Promise<Node|Dict<number, Node>>} The fetched node(s).
*/
async fetch(id, options = {}) {
if (id && !options.force) {
const n = this.cache.get(id);
if (n) return Promise.resolve(n);
}
const query = build(options, { includes: NodeManager.INCLUDES });
const data = await this.client.requests.get(
(id ? endpoints.nodes.get(id) : endpoints.nodes.main) + query
);
return this._patch(data);
}
/**
* Queries the API for a node (or nodes) that match the specified query filter/sort.
* This does NOT check the cache first, it is a direct fetch from the API.
* Available filters:
* * uuid
* * name
* * fqdn
* * daemonTokenId
*
* Available sort options:
* * id
* * -id
* * uuid
* * -uuid
* * memory
* * -memory
* * disk
* * -disk
*
* @param {string} entity The entity to query.
* @param {string} filter The filter to use for the query.
* @param {string} sort The order to sort the results in.
* @returns {Promise<Dict<number, Node>>} A dict of the queried nodes.
*/
async query(entity, filter, sort) {
if (!sort && !filter) throw new Error('Sort or filter is required.');
if (filter === 'daemonTokenId') filter = 'daemon_token_id';
const { FILTERS, SORTS } = NodeManager;
const query = build(
{ filter:[filter, entity], sort },
{ filters: FILTERS, sorts: SORTS }
);
const data = await this.client.requests.get(endpoints.nodes.main + query);
return this._patch(data);
}
/**
* Creates a new Pterodactyl server node.
* @param {object} options Node creation options.
* @param {string} options.name The name of the node.
* @param {number} options.location The ID of the location for the node.
* @param {string} options.fqdn The FQDN for the node.
* @param {string} options.scheme The HTTP/HTTPS scheme for the node.
* @param {number} options.memory The amount of memory for the node.
* @param {number} options.disk The amount of disk for the node.
* @param {object} options.sftp SFTP options.
* @param {number} options.sftp.port The port for the SFTP.
* @param {number} options.sftp.listener The listener port for the SFTP.
* @param {number} [options.upload_size] The maximum upload size for the node.
* @param {number} [options.memory_overallocate] The amount of memory over allocation.
* @param {number} [options.disk_overallocate] The amount of disk over allocation.
* @returns {Promise<Node>} The new node.
*/
async create(options = {}) {
if (
!options.name ||
!options.location ||
!options.fqdn ||
!options.scheme ||
!options.memory ||
!options.disk ||
!options.sftp?.port ||
!options.sftp?.listener
) throw new Error('Missing required Node creation option.');
const payload = {};
payload.name = options.name;
payload.location = options.location;
payload.fqdn = options.fqdn;
payload.scheme = options.scheme;
payload.memory = options.memory;
payload.disk = options.disk;
payload.sftp = options.sftp;
payload.upload_size = options.upload_size ?? 100;
payload.memory_overallocate = options.memory_overallocate ?? 0;
payload.disk_overallocate = options.disk_overallocate ?? 0;
const data = await this.client.requests.post(
endpoints.nodes.main, payload
);
return this._patch(data);
}
/**
* Updates a specified node.
* @param {number|Node} node The node to update.
* @param {object} options Node update options.
* @param {string} [options.name] The name of the node.
* @param {number} [options.location] The ID of the location for the node.
* @param {string} [options.fqdn] The FQDN for the node.
* @param {string} [options.scheme] The HTTP/HTTPS scheme for the node.
* @param {number} [options.memory] The amount of memory for the node.
* @param {number} [options.disk] The amount of disk for the node.
* @param {object} [options.sftp] SFTP options.
* @param {number} [options.sftp.port] The port for the SFTP.
* @param {number} [options.sftp.listener] The listener port for the SFTP.
* @param {number} [options.upload_size] The maximum upload size for the node.
* @param {number} [options.memory_overallocate] The amount of memory over allocation.
* @param {number} [options.disk_overallocate] The amount of disk over allocation.
* @returns {Promise<Node>} The updated node instance.
*/
async update(node, options = {}) {
if (typeof node === 'number') node = await this.fetch(node);
if (!Object.keys(options).length) throw new Error('Too few options to update.');
const { id } = node;
const payload = {};
Object.entries(node.toJSON()).forEach(e => payload[e[0]] = options[e[0]] ?? e[1]);
payload.memory_overallocate = payload.overallocated_memory;
payload.disk_overallocate = payload.overallocated_disk;
const data = await this.client.requests.patch(
endpoints.nodes.get(id), payload
);
return this._patch(data);
}
/**
* Deletes a node from Pterodactyl.
* @param {number|Node} node The node to delete.
* @returns {Promise<boolean>}
*/
async delete(node) {
if (node instanceof Node) node = node.id;
await this.client.requests.delete(endpoints.nodes.get(node));
this.cache.delete(node);
return true;
}
}
module.exports = NodeManager;
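The merge step inside `NodeManager#update` (provided options win, current node values fill the gaps) can be illustrated with a self-contained sketch. `mergeNodeUpdate` is a hypothetical name for illustration only; the real method reads the current values from `node.toJSON()`:

```javascript
// Options override the current fields; untouched fields pass through.
function mergeNodeUpdate(current, options) {
    const payload = {};
    Object.entries(current).forEach(([key, value]) => {
        payload[key] = options[key] ?? value;
    });
    return payload;
}

const node = { name: 'node-1', memory: 4096, disk: 10240 };
console.log(mergeNodeUpdate(node, { memory: 8192 }));
// → { name: 'node-1', memory: 8192, disk: 10240 }
```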


@@ -1,113 +0,0 @@
const ApplicationServerManager = require('./ApplicationServerManager');
const NestManager = require('./NestManager');
const NodeAllocationManager = require('./NodeAllocationManager');
const NodeLocationManager = require('./NodeLocationManager');
const NodeManager = require('./NodeManager');
const UserManager = require('./UserManager');
const RequestManager = require('../http/RequestManager');
const loader = require('../util/configLoader');
/**
* The base class for the Pterodactyl application API.
* This operates using a Pterodactyl application API key which can be found
* at <your.domain.name/admin/api>.
*
* **Warning:** Keep your API key private at all times. Exposing this can lead
* to your servers, nodes, configurations and more being corrupted and/or deleted.
*/
class PteroApp {
/**
* @param {string} domain The Pterodactyl domain.
* @param {string} auth The authentication key for Pterodactyl.
* @param {ApplicationOptions} [options] Additional application options.
*/
constructor(domain, auth, options = {}) {
if (!/https?\:\/\/(?:localhost\:\d{4}|[\w\.\-]{3,256})/gi.test(domain))
throw new SyntaxError(
"Domain URL must start with 'http://' or 'https://' and "+
'must be bound to a port if using localhost.'
);
/**
* The domain for your Pterodactyl panel. This should be the main URL only
* (not "/api"). Any additional paths will count as the API path.
* @type {string}
*/
this.domain = domain.endsWith('/') ? domain.slice(0, -1) : domain;
/**
* The API key for your Pterodactyl panel. This should be kept private at
* all times. Full access must be granted in the panel for the whole library
* to be accessible.
* @type {string}
*/
this.auth = auth;
/**
* Additional startup options for the application (optional).
* @type {ApplicationOptions}
*/
this.options = loader.appConfig(options);
/** @type {UserManager} */
this.users = new UserManager(this);
/** @type {NodeManager} */
this.nodes = new NodeManager(this);
/** @type {NestManager} */
this.nests = new NestManager(this);
/** @type {ApplicationServerManager} */
this.servers = new ApplicationServerManager(this);
/** @type {NodeLocationManager} */
this.locations = new NodeLocationManager(this);
/** @type {NodeAllocationManager} */
this.allocations = new NodeAllocationManager(this);
/** @type {RequestManager} @internal */
this.requests = new RequestManager('application', domain, auth);
}
/**
* Used for performing preload requests to Pterodactyl.
* @returns {Promise<boolean>}
*/
async connect() {
if (this.options.users.fetch && this.options.users.cache) await this.users.fetch();
if (this.options.nodes.fetch && this.options.nodes.cache) await this.nodes.fetch();
if (this.options.nests.fetch && this.options.nests.cache) await this.nests.fetch();
if (this.options.servers.fetch && this.options.servers.cache) await this.servers.fetch();
if (this.options.locations.fetch && this.options.locations.cache)
await this.locations.fetch();
return true;
}
get ping() {
return this.requests._ping;
}
}
module.exports = PteroApp;
/**
* @typedef {object} OptionSpec
* @property {boolean} fetch
* @property {boolean} cache
* @property {number} max
*/
/**
* Startup options for the application API.
* By default, all fetch options are `false`, and all cache options are `true`.
* Enabling fetch and disabling cache for the same class will cancel out the request.
* @typedef {object} ApplicationOptions
* @property {OptionSpec} [users] Options for fetching and caching users.
* @property {OptionSpec} [nodes] Options for fetching and caching nodes.
* @property {OptionSpec} [nests] Options for fetching and caching nests.
* @property {OptionSpec} [servers] Options for fetching and caching servers.
* @property {OptionSpec} [locations] Options for fetching and caching node locations.
*/
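The domain validation and trailing-slash normalization performed in the `PteroApp` constructor can be exercised in isolation. This sketch lifts that logic into a standalone function (`normalizeDomain` is a hypothetical name):

```javascript
// Mirrors the constructor's checks: require an http(s) URL, and if
// using localhost, a bound port; then strip any trailing slash.
function normalizeDomain(domain) {
    if (!/https?:\/\/(?:localhost:\d{4}|[\w.\-]{3,256})/gi.test(domain))
        throw new SyntaxError(
            "Domain URL must start with 'http://' or 'https://' and " +
            'must be bound to a port if using localhost.'
        );
    return domain.endsWith('/') ? domain.slice(0, -1) : domain;
}

console.log(normalizeDomain('https://panel.example.com/'));
// → 'https://panel.example.com'
```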


@@ -1,225 +0,0 @@
const { PteroUser } = require('../structures/User');
const Dict = require('../structures/Dict');
const build = require('../util/query');
const endpoints = require('./endpoints');
class UserManager {
/**
* Allowed filter arguments for users.
*/
static get FILTERS() {
return Object.freeze([
'email', 'uuid', 'uuidShort',
'username', 'image', 'external_id'
]);
}
/**
* Allowed sort arguments for users.
*/
static get SORTS() {
return Object.freeze(['id', '-id', 'uuid', '-uuid']);
}
constructor(client) {
this.client = client;
/** @type {Dict<number, PteroUser>} */
this.cache = new Dict();
}
_patch(data) {
if (data.data) {
const res = new Dict();
for (let o of data.data) {
o = o.attributes;
const u = new PteroUser(this.client, o);
res.set(u.id, u);
}
if (this.client.options.users.cache) res.forEach((v, k) => this.cache.set(k, v));
return res;
}
const u = new PteroUser(this.client, data.attributes);
if (this.client.options.users.cache) this.cache.set(u.id, u);
return u;
}
/**
* Resolves a user from an object. This can be:
* * a string
* * a number
* * an object
*
* Returns `undefined` if not found.
* @param {string|number|object|PteroUser} obj The object to resolve from.
* @returns {?PteroUser} The resolved user.
*/
resolve(obj) {
if (obj instanceof PteroUser) return obj;
if (typeof obj === 'number') return this.cache.get(obj);
if (typeof obj === 'string') return this.cache.find(u => u.username === obj);
if (obj.relationships?.user) return this._patch(obj.relationships.user);
return undefined;
}
/**
* Returns a formatted URL to the user in the admin panel.
* @param {number|PteroUser} user The user or ID of the user.
* @returns {string} The formatted URL.
*/
adminURLFor(user) {
if (user instanceof PteroUser) return user.adminURL;
return `${this.client.domain}/admin/users/view/${user}`;
}
/**
* Fetches a user from the Pterodactyl API with an optional cache check.
* @param {number} [id] The ID of the user.
* @param {object} [options] Additional fetch options.
* @param {boolean} [options.force] Whether to skip checking the cache and fetch directly.
* @param {boolean} [options.withServers] Whether to include servers the user(s) own.
* @returns {Promise<PteroUser|Dict<number, PteroUser>>} The fetched user(s).
*/
async fetch(id, options = {}) {
if (id && !options.force) {
const u = this.cache.get(id);
if (u) return Promise.resolve(u);
}
const data = await this.client.requests.get(
(id ? endpoints.users.get(id) : endpoints.users.main) +
(options.withServers ? '?include=servers' : '')
);
return this._patch(data);
}
/**
* Fetches a user by their external ID with an optional cache check.
* @param {number} id The ID of the external user.
* @param {object} [options] Additional fetch options.
* @param {boolean} [options.force] Whether to skip checking the cache and fetch directly.
* @param {boolean} [options.withServers] Whether to include servers the user has.
* @returns {Promise<PteroUser>} The fetched user.
*/
async fetchExternal(id, options = {}) {
if (!options.force) {
const u = this.cache.find(u => u.externalId === id);
if (u) return Promise.resolve(u);
}
const data = await this.client.requests.get(
endpoints.users.ext(id) + (options.withServers ? '?include=servers' : '')
);
return this._patch(data);
}
/**
* Queries the API for a user (or users) that match the specified query filter.
* Keep in mind this does NOT check the cache first, it will fetch from the API directly.
* Available query filters are:
* * email
* * name
* * uuid
* * uuidShort
* * identifier (alias for uuidShort)
* * externalId
* * image
*
* Available sort options are:
* * id
* * -id
* * uuid
* * -uuid
*
* @param {string} entity The entity (string) to query.
* @param {string} [filter] The filter to use for the query.
* @param {string} [sort] The order to sort the results in.
* @returns {Promise<Dict<number, PteroUser>>} A dict of the queried users.
*/
async query(entity, filter, sort) {
if (!sort && !filter) throw new Error('Sort or filter is required.');
if (filter === 'identifier') filter = 'uuidShort';
if (filter === 'externalId') filter = 'external_id';
const { FILTERS, SORTS } = UserManager;
const query = build(
{ filter:[filter, entity], sort },
{ filters: FILTERS, sorts: SORTS }
);
const data = await this.client.requests.get(endpoints.users.main + query);
return this._patch(data);
}
/**
* Creates a new Pterodactyl user account.
* @param {string} email The email for the account.
* @param {string} username The username for the account.
* @param {string} firstname The firstname for the account.
* @param {string} lastname The lastname for the account.
* @returns {Promise<PteroUser>} The new user.
*/
async create(email, username, firstname, lastname) {
await this.client.requests.post(
endpoints.users.main,
{ email, username, first_name: firstname, last_name: lastname }
);
const data = await this.query(email, 'email', '-id');
return data.find(u => u.email === email);
}
/**
* Updates the specified user's account.
* @param {number|PteroUser} user The user to update.
* @param {object} options Changes to update the user with.
* @param {string} [options.email] The new email for the account.
* @param {string} [options.username] The new username for the account.
* @param {string} [options.firstname] The new firstname for the account.
* @param {string} [options.lastname] The new lastname for the account.
* @param {string} [options.language] The new language for the account.
* @param {string} options.password The password for the user account.
* @returns {Promise<PteroUser>} The updated user instance.
*/
async update(user, options = {}) {
if (!options.password) throw new Error('User password is required.');
if (!Object.keys(options).length) throw new Error('Too few parameters to update.');
if (typeof user === 'number') user = await this.fetch(user);
const { password } = options;
let { id, email, username, firstname, lastname, language } = user;
if (options.email) email = options.email;
if (options.username) username = options.username;
if (options.firstname) firstname = options.firstname;
if (options.lastname) lastname = options.lastname;
if (options.language) language = options.language;
const data = await this.client.requests.patch(
endpoints.users.get(id),
{
email,
username,
first_name: firstname,
last_name: lastname,
language,
password
}
);
return this._patch(data);
}
/**
* Deletes the user account from Pterodactyl.
* @param {number|PteroUser} user The user to delete.
* @returns {Promise<boolean>}
*/
async delete(user) {
if (user instanceof PteroUser) user = user.id;
await this.client.requests.delete(endpoints.users.get(user));
this.cache.delete(user);
return true;
}
}
module.exports = UserManager;
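The filter aliasing in `UserManager#query` (`identifier` → `uuidShort`, `externalId` → `external_id`, then validation against `FILTERS`) can be sketched on its own. `normalizeUserFilter` is a hypothetical helper name for illustration:

```javascript
const USER_FILTERS = Object.freeze([
    'email', 'uuid', 'uuidShort',
    'username', 'image', 'external_id'
]);

// Translate the public-facing aliases into the API's filter names,
// then reject anything not in the allowed list.
function normalizeUserFilter(filter) {
    if (filter === 'identifier') filter = 'uuidShort';
    if (filter === 'externalId') filter = 'external_id';
    if (!USER_FILTERS.includes(filter))
        throw new Error(`Invalid user filter: ${filter}`);
    return filter;
}

console.log(normalizeUserFilter('externalId')); // → 'external_id'
```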


@@ -1,39 +0,0 @@
module.exports = {
users:{
main: '/api/application/users',
get: u => `/api/application/users/${u}`,
ext: u => `/api/application/users/external/${u}`
},
nodes:{
main: '/api/application/nodes',
get: n => `/api/application/nodes/${n}`,
config: n => `/api/application/nodes/${n}/configuration`,
allocations:{
main: n => `/api/application/nodes/${n}/allocations`,
get: (n, a) => `/api/application/nodes/${n}/allocations/${a}`
}
},
servers:{
main: '/api/application/servers',
get: s => `/api/application/servers/${s}`,
ext: s => `/api/application/servers/external/${s}`,
details: s => `/api/application/servers/${s}/details`,
build: s => `/api/application/servers/${s}/build`,
startup: s => `/api/application/servers/${s}/startup`,
suspend: s => `/api/application/servers/${s}/suspend`,
unsuspend: s => `/api/application/servers/${s}/unsuspend`,
reinstall: s => `/api/application/servers/${s}/reinstall`
},
locations:{
main: '/api/application/locations',
get: l => `/api/application/locations/${l}`
},
nests:{
main: '/api/application/nests',
get: n => `/api/application/nests/${n}`,
eggs:{
main: n => `/api/application/nests/${n}/eggs`,
get: (n, e) => `/api/application/nests/${n}/eggs/${e}`
}
}
}
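The endpoint map above is a set of template-literal builders that interpolate IDs directly into URL paths. A small excerpt, reproduced here so it can be checked standalone:

```javascript
// Excerpt of the nodes builders: nested objects compose path segments.
const nodes = {
    get: n => `/api/application/nodes/${n}`,
    allocations: {
        get: (n, a) => `/api/application/nodes/${n}/allocations/${a}`
    }
};

console.log(nodes.allocations.get(4, 12));
// → '/api/application/nodes/4/allocations/12'
```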


@@ -1,117 +0,0 @@
const Dict = require('../structures/Dict');
const endpoints = require('./endpoints');
class BackupManager {
constructor(client, server) {
this.client = client;
this.server = server;
/** @type {Dict<string, Backup>} */
this.cache = new Dict();
}
_patch(data) {
if (data.data) {
const s = new Dict();
for (let o of data.data) {
o = o.attributes;
this.cache.set(o.uuid, {
uuid: o.uuid,
name: o.name,
ignoredFiles: o.ignored_files,
hash: o.hash,
bytes: o.bytes,
createdAt: new Date(o.created_at),
completedAt: o.completed_at ? new Date(o.completed_at) : null
});
s.set(o.uuid, this.cache.get(o.uuid));
}
return s;
}
data = data.attributes;
this.cache.set(data.uuid, {
uuid: data.uuid,
name: data.name,
ignoredFiles: data.ignored_files,
hash: data.hash,
bytes: data.bytes,
createdAt: new Date(data.created_at),
completedAt: data.completed_at ? new Date(data.completed_at) : null
});
return this.cache.get(data.uuid);
}
/**
* Fetches a backup from the Pterodactyl API with an optional cache check.
* @param {string} [id] The UUID of the backup.
* @param {boolean} [force] Whether to skip checking the cache and fetch directly.
* @returns {Promise<Backup|Dict<string, Backup>>} The fetched backup(s).
*/
async fetch(id, force = false) {
if (id && !force) {
const b = this.cache.get(id);
if (b) return Promise.resolve(b);
}
const data = await this.client.requests.get(
id
? endpoints.servers.backups.get(this.server.identifier, id)
: endpoints.servers.backups.main(this.server.identifier)
);
return this._patch(data);
}
/**
* Creates a new backup.
* @returns {Promise<Backup>} The new backup object.
*/
async create() {
return this._patch(
await this.client.requests.post(
endpoints.servers.backups.main(this.server.identifier),
null
)
);
}
/**
* Returns a download link for the backup.
* @param {string} id The UUID of the backup.
* @returns {Promise<string>} The download link.
*/
async download(id) {
const url = await this.client.requests.get(
endpoints.servers.backups.download(this.server.identifier, id)
);
return url.attributes.url;
}
/**
* Deletes a specified backup.
* @param {string} id The UUID of the backup.
* @returns {Promise<boolean>}
*/
async delete(id) {
await this.client.requests.delete(
endpoints.servers.backups.get(this.server.identifier, id)
);
this.cache.delete(id);
return true;
}
}
module.exports = BackupManager;
/**
* Represents a server backup.
* @typedef {object} Backup
* @property {string} uuid The UUID of the backup.
* @property {string} name The name of the backup.
* @property {unknown[]} ignoredFiles An array of files ignored by the backup.
* @property {?string} hash The sha256 hash for the backup.
* @property {number} bytes The size of the backup in bytes.
* @property {Date} createdAt The date the backup was created.
* @property {?Date} completedAt The date the backup was completed.
*/
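The snake_case-to-camelCase mapping `BackupManager#_patch` applies to each backup record can be sketched as a pure function (`toBackup` is a hypothetical name; input fields match the API attributes used above):

```javascript
// Maps one raw API backup object into the Backup shape documented above.
function toBackup(o) {
    return {
        uuid: o.uuid,
        name: o.name,
        ignoredFiles: o.ignored_files,
        hash: o.hash,
        bytes: o.bytes,
        createdAt: new Date(o.created_at),
        completedAt: o.completed_at ? new Date(o.completed_at) : null
    };
}

const backup = toBackup({
    uuid: 'abc', name: 'daily', ignored_files: [], hash: null,
    bytes: 1024, created_at: '2022-06-26T17:00:00Z', completed_at: null
});
console.log(backup.completedAt); // → null (backup not yet completed)
```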


@@ -1,98 +0,0 @@
const ClientServer = require('../structures/ClientServer');
const Dict = require('../structures/Dict');
const build = require('../util/query');
const endpoints = require('./endpoints');
class ClientServerManager {
/**
* Allowed include arguments for client servers.
*/
static get INCLUDES() {
return Object.freeze(['egg', 'subusers']);
}
constructor(client) {
this.client = client;
/** @type {Dict<string, ClientServer>} */
this.cache = new Dict();
/** @type {PageData} */
this.pageData = {};
}
_patch(data) {
this._resolveMeta(data.meta?.pagination);
if (data.data) {
const res = new Dict();
for (const o of data.data) {
const s = new ClientServer(this.client, o);
res.set(s.identifier, s);
}
if (this.client.options.servers.cache) res.forEach((v, k) => this.cache.set(k, v));
return res;
}
const s = new ClientServer(this.client, data);
if (this.client.options.servers.cache) this.cache.set(s.identifier, s);
return s;
}
_resolveMeta(data) {
if (!data) return;
this.pageData = {
current: data.current_page,
total: data.total,
count: data.count,
perPage: data.per_page,
totalPages: data.total_pages,
links: data.links
}
}
/**
* Returns a formatted URL to the server.
* @param {string|ClientServer} server The server or identifier of the server.
* @returns {string} The formatted URL.
*/
panelURLFor(server) {
if (server instanceof ClientServer) return server.panelURL;
return `${this.client.domain}/server/${server}`;
}
/**
* Fetches a server (or all if no id is specified) from the Pterodactyl API.
* @param {string} [id] The ID of the server.
* @param {object} [options] Additional fetch options.
* @param {boolean} [options.force] Whether to skip checking the cache and fetch directly.
* @param {string[]} [options.include] Additional fetch parameters to include.
* @returns {Promise<ClientServer|Dict<string, ClientServer>>} The fetched server(s).
*/
async fetch(id, options = {}) {
if (id) {
if (!options.force) {
const s = this.cache.get(id);
if (s) return Promise.resolve(s);
}
}
const query = build(options, { includes: ClientServerManager.INCLUDES });
const data = await this.client.requests.get(
(id ? endpoints.servers.get(id) : endpoints.servers.main) + query
);
return this._patch(data);
}
}
module.exports = ClientServerManager;
/**
* @typedef {object} PageData
* @property {number} current The current page.
* @property {number} total The total number of results.
* @property {number} count The number of items on that page.
* @property {number} perPage The max amount of items per page.
* @property {number} totalPages The total number of pages.
* @property {object} links Pagination links.
*/
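`_resolveMeta` translates the API's pagination metadata into the `PageData` shape. A standalone sketch of that mapping (the free-function form is for illustration only):

```javascript
// Maps raw `meta.pagination` data into the PageData shape; returns an
// empty object when no pagination block is present.
function resolveMeta(data) {
    if (!data) return {};
    return {
        current: data.current_page,
        total: data.total,
        count: data.count,
        perPage: data.per_page,
        totalPages: data.total_pages,
        links: data.links
    };
}

console.log(resolveMeta({
    current_page: 1, total: 50, count: 25,
    per_page: 25, total_pages: 2, links: {}
}).perPage); // → 25
```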


@@ -1,162 +0,0 @@
const { EventEmitter } = require('events');
const ClientServerManager = require('./ClientServerManager');
const { ClientUser } = require('../structures/User');
const RequestManager = require('../http/RequestManager');
const ScheduleManager = require('./ScheduleManager');
const WebSocketManager = require('./ws/WebSocketManager');
const endpoints = require('./endpoints');
const loader = require('../util/configLoader');
const Shard = require('./ws/Shard');
/**
* The base class for the Pterodactyl client API.
* This operates using a Pterodactyl user access token which can be found at
* <your.domain.name/account/api>.
*
* The access token will grant you access to your servers only, with the option
* to fetch node and API key information and establish websockets to your servers.
* @extends {EventEmitter}
*/
class PteroClient extends EventEmitter {
/**
* @param {string} domain The Pterodactyl domain.
* @param {string} auth The authentication key for Pterodactyl.
* @param {ClientOptions} [options] Additional client options.
*/
constructor(domain, auth, options = {}) {
super();
if (!/https?\:\/\/(?:localhost\:\d{4}|[\w\.\-]{3,256})/gi.test(domain))
throw new SyntaxError(
"Domain URL must start with 'http://' or 'https://' and "+
'must be bound to a port if using localhost.'
);
/**
* The domain for your Pterodactyl panel. This should be the main URL only
* (not "/api"). Any additional paths will count as the API path.
* @type {string}
*/
this.domain = domain.endsWith('/') ? domain.slice(0, -1) : domain;
/**
* The API key for your Pterodactyl panel. This should be kept private at
* all times. Full access must be granted in the panel for the whole library
* to be accessible.
* @type {string}
*/
this.auth = auth;
/**
* Additional startup options for the client (optional).
* @type {ClientOptions}
*/
this.options = loader.clientConfig(options);
/** @type {ClientUser} */
this.user = null;
/** @type {ClientServerManager} */
this.servers = new ClientServerManager(this);
/** @type {ScheduleManager} */
this.schedules = new ScheduleManager(this);
/** @type {RequestManager} @internal */
this.requests = new RequestManager('Client', this.domain, this.auth);
/** @type {WebSocketManager} */
this.ws = new WebSocketManager(this);
}
/**
* Performs preload requests to Pterodactyl and launches websocket connections.
* @returns {Promise<boolean>}
* @fires PteroClient#ready
*/
async connect() {
if (this.options.fetchClient) await this.fetchClient();
if (this.options.servers.fetch && this.options.servers.cache) await this.servers.fetch();
return true;
}
get ping() {
return this.requests._ping;
}
/**
* Fetches the client user's account. This will contain information such as 2FA
* recovery tokens, API keys and email data.
* @returns {Promise<ClientUser>} The client user.
*/
async fetchClient() {
const data = await this.requests.get(endpoints.account.main);
this.user = new ClientUser(this, data.attributes);
return this.user;
}
/**
* Adds a server or an array of servers to be connected to websockets.
* @param {string[] | string} ids The identifier(s) of the server.
* @returns {Shard|Shard[]} Created (or reused) shard(s).
*/
addSocketServer(ids) {
if (typeof ids === 'string')
return this.ws.createShard(ids);
else if (ids instanceof Array)
return ids.map(id => this.ws.createShard(id));
}
/**
* Removes a server from websocket connections.
* @param {string} id The identifier of the server.
* @returns {boolean} Whether shard was removed.
*/
removeSocketServer(id) {
return this.ws.removeShard(id);
}
/**
* Closes any existing websocket connections.
* @returns {void}
*/
disconnect() {
if (this.ws.readyAt) this.ws.destroy();
}
}
module.exports = PteroClient;
/**
* @typedef {object} OptionSpec
* @property {boolean} fetch
* @property {boolean} cache
* @property {number} max
*/
/**
* Startup options for the client API.
* @typedef {object} ClientOptions
* @property {boolean} [fetchClient] Whether to fetch the client user (default `true`).
* @property {OptionSpec} [servers] Options for fetching and caching servers.
* @property {OptionSpec} [subUsers] Options for fetching and caching server subusers.
* @property {string[]} [disableEvents] An array of events to disable (won't be emitted).
*/
/**
* Debug event emitted for websocket events.
* @event PteroClient#debug
* @param {string} message The message emitted with the event.
*/
/**
* Emitted when the websocket encounters an error.
* @event PteroClient#error
* @param {*} error The error received.
*/
/**
* Emitted when the websocket manager is ready.
* @event PteroClient#ready
*/


@@ -1,142 +0,0 @@
const Schedule = require('../structures/Schedule');
const Dict = require('../structures/Dict');
const endpoints = require('./endpoints');
class ScheduleManager {
constructor(client) {
this.client = client;
/** @type {Dict<string, Dict<number, Schedule>>} */
this.cache = new Dict();
}
_patch(id, data) {
if (data.data) {
const res = new Dict();
for (const o of data.data) {
const s = new Schedule(this.client, id, o);
res.set(s.id, s);
}
let c = this.cache.get(id);
if (c) res.forEach((v, k) => c.set(k, v)); else c = res;
this.cache.set(id, c);
return res;
}
const s = new Schedule(this.client, id, data);
let c = this.cache.get(id);
if (c) c.set(s.id, s); else c = new Dict().set(s.id, s);
this.cache.set(id, c);
return s;
}
/**
* Returns a formatted URL to the schedule.
* @param {string} id The identifier of the server.
* @param {string|Schedule} schedule The schedule or identifier of the schedule.
* @returns {string} The formatted URL.
*/
panelURLFor(id, schedule) {
if (schedule instanceof Schedule) return schedule.panelURL;
return `${this.client.domain}/server/${id}/schedules/${schedule}`;
}
/**
* Fetches a schedule or all schedules from a specified server (with optional cache check).
* @param {string} server The identifier of the server.
* @param {number} [id] The ID of the schedule.
* @param {boolean} [force] Whether to skip checking the cache and fetch directly.
* @returns {Promise<Schedule|Dict<number, Schedule>>} The fetched schedule(s).
*/
async fetch(server, id, force) {
if (id && !force) {
const s = this.cache.get(server)?.get(id);
if (s) return s;
}
const data = await this.client.requests.get(
id
? endpoints.servers.schedules.get(server, id)
: endpoints.servers.schedules.main(server)
);
return this._patch(server, data);
}
/**
* Creates a new schedule for a specified server.
* @param {string} server The identifier of the server to create the schedule for.
* @param {object} options Schedule creation options.
* @param {string} options.name The name of the schedule.
* @param {boolean} options.active Whether the schedule should be active when created.
* @param {string} options.minute The minute interval (in cron syntax).
* @param {string} options.hour The hour interval (in cron syntax).
* @param {string} [options.dayOfWeek] The day of the week interval (in cron syntax).
* @param {string} [options.dayOfMonth] The day of the month interval (in cron syntax).
* @returns {Promise<Schedule>} The new schedule.
*/
async create(server, options = {}) {
if (Object.keys(options).length < 4)
throw new Error('Missing required Schedule creation option.');
const payload = {};
payload.name = options.name;
payload.is_active = options.active;
payload.minute = options.minute;
payload.hour = options.hour;
payload.day_of_week = options.dayOfWeek || '*';
payload.day_of_month = options.dayOfMonth || '*';
const data = await this.client.requests.post(
endpoints.servers.schedules.main(server), payload
);
return this._patch(server, data);
}
/**
* Updates a schedule for a specified server.
* @param {string} server The server identifier of the schedule.
* @param {number} id The ID of the schedule.
* @param {object} options Schedule update options.
* @param {string} [options.name] The name of the schedule.
* @param {boolean} [options.active] Whether the schedule should be active when created.
* @param {string} [options.minute] The minute interval (in cron syntax).
* @param {string} [options.hour] The hour interval (in cron syntax).
* @param {string} [options.dayOfWeek] The day of the week interval (in cron syntax).
* @param {string} [options.dayOfMonth] The day of the month interval (in cron syntax).
* @returns {Promise<Schedule>} The updated schedule instance.
*/
async update(server, id, options = {}) {
if (!Object.keys(options).length) throw new Error('Too few options to update.');
const sch = await this.fetch(server, id);
const payload = {};
payload.name = options.name || sch.name;
payload.is_active = options.active ?? sch.active;
payload.minute = options.minute || sch.cron.minute;
payload.hour = options.hour || sch.cron.hour;
payload.day_of_week = options.dayOfWeek || sch.cron.week;
payload.day_of_month = options.dayOfMonth || sch.cron.month;
const data = await this.client.requests.post(
endpoints.servers.schedules.get(server, id), payload
);
return this._patch(server, data);
}
/**
* Deletes a schedule from a specified server.
* @param {string} server The server identifier of the schedule.
* @param {number} id The ID of the schedule.
* @returns {Promise<boolean>}
*/
async delete(server, id) {
await this.client.requests.delete(
endpoints.servers.schedules.get(server, id)
);
this.cache.get(server)?.delete(id);
return true;
}
}
module.exports = ScheduleManager;
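The payload mapping in `create` above (camelCase options translated to the panel's snake_case schedule fields, with `*` as the cron default) can be sketched in isolation; `buildSchedulePayload` is a hypothetical standalone helper, not part of the manager:

```javascript
// Sketch of ScheduleManager#create's option-to-payload mapping.
// buildSchedulePayload is a hypothetical helper, not a library export.
function buildSchedulePayload(options) {
    const required = ['name', 'active', 'minute', 'hour'];
    if (!required.every(k => k in options))
        throw new Error('Missing required Schedule creation option.');
    return {
        name: options.name,
        is_active: options.active,
        minute: options.minute,
        hour: options.hour,
        day_of_week: options.dayOfWeek || '*',
        day_of_month: options.dayOfMonth || '*'
    };
}

// '0 4 * * *' style schedule: every day at 04:00.
const payload = buildSchedulePayload({
    name: 'Nightly restart',
    active: true,
    minute: '0',
    hour: '4'
});
```

Omitted `dayOfWeek`/`dayOfMonth` fall through to `*`, matching the defaults applied above.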


@@ -1,129 +0,0 @@
const { PteroSubUser } = require('../structures/User');
const Permissions = require('../structures/Permissions');
const { PermissionResolvable } = require('../structures/Permissions');
const Dict = require('../structures/Dict');
const endpoints = require('./endpoints');
class SubUserManager {
constructor(client, server) {
this.client = client;
this.server = server;
/** @type {Dict<string, PteroSubUser>} */
this.cache = new Dict();
}
_patch(data) {
if (data.data) {
const s = new Dict();
for (let o of data.data) {
o = o.attributes;
const u = new PteroSubUser(this.client, this.server.identifier, o);
s.set(u.uuid, u);
}
if (this.client.options.subUsers.cache) s.forEach((v, k) => this.cache.set(k, v));
return s;
}
const u = new PteroSubUser(this.client, this.server.identifier, data.attributes);
if (this.client.options.subUsers.cache) this.cache.set(u.uuid, u);
return u;
}
/**
* Resolves a subuser from an object. This can be:
* * a string
* * a number
* * an object
*
* Returns `undefined` if not found.
* @param {string|number|object|PteroSubUser} obj The object to resolve from.
* @returns {?PteroSubUser} The resolved subuser.
*/
resolve(obj) {
if (obj instanceof PteroSubUser) return obj;
if (typeof obj === 'number') return this.cache.get(obj);
if (typeof obj === 'string') return this.cache.find(s => s.name === obj);
if (obj.relationships?.user) return this._patch(obj.relationships.user);
return undefined;
}
/**
* Returns a formatted URL to the subuser.
* @returns {string} The formatted URL.
*/
get panelURL() {
return `${this.client.domain}/server/${this.server.identifier}/users`;
}
/**
* Fetches a server subuser from the Pterodactyl API with an optional cache check.
* @param {string} [id] The UUID of the user.
* @param {boolean} [force] Whether to skip checking the cache and fetch directly.
* @returns {Promise<PteroSubUser|Dict<string, PteroSubUser>>} The fetched user(s).
*/
async fetch(id, force = false) {
if (id && !force) {
const u = this.cache.get(id);
if (u) return u;
}
const data = await this.client.requests.get(
id
? endpoints.servers.users.get(this.server.identifier, id)
: endpoints.servers.users.main(this.server.identifier)
);
return this._patch(data);
}
/**
* Adds a specified user to the server.
* @param {string} email The email of the associated account.
* @param {PermissionResolvable} permissions Permissions for the account.
* @returns {Promise<PteroSubUser>} The new subuser.
*/
async add(email, permissions) {
if (typeof email !== 'string') throw new Error('Email must be a string.');
const perms = new Permissions(permissions).toStrings();
if (!perms.length) throw new Error('Need at least 1 permission for the subuser.');
const data = await this.client.requests.post(
endpoints.servers.users.main(this.server.identifier),
{ email, permissions: perms }
);
return this._patch(data);
}
/**
* Updates the specified subuser's server permissions.
* @param {string} uuid The UUID of the subuser.
* @param {PermissionResolvable} permissions Permissions for the subuser.
*/
async setPermissions(uuid, permissions) {
const perms = new Permissions(permissions).toStrings();
if (!perms.length) throw new Error('Need at least 1 permission for the subuser.');
const data = await this.client.requests.post(
endpoints.servers.users.get(this.server.identifier, uuid),
{ permissions: perms }
);
return this._patch(data);
}
/**
* Removes the specified subuser from the server.
* @param {string} id The UUID of the subuser.
* @returns {Promise<boolean>}
*/
async remove(id) {
await this.client.requests.delete(
endpoints.servers.users.get(this.server.identifier, id)
);
this.cache.delete(id);
return true;
}
}
module.exports = SubUserManager;
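`_patch` above distinguishes list responses from single-object responses by the presence of `data`. A minimal sketch of the list shape it consumes, keyed the same way but with a plain `Map` standing in for `Dict` (payload values are hypothetical):

```javascript
// Pterodactyl-style list response: each entry wraps its fields in `attributes`.
const listResponse = {
    data: [
        { attributes: { uuid: 'u-1', username: 'alice' } },
        { attributes: { uuid: 'u-2', username: 'bob' } }
    ]
};

// Same keying strategy as _patch: one entry per subuser, keyed by UUID.
const users = new Map(
    listResponse.data.map(o => [o.attributes.uuid, o.attributes])
);
```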


@@ -1,67 +0,0 @@
module.exports = {
account:{
main: '/api/client/account',
tfa: '/api/client/account/two-factor',
email: '/api/client/account/email',
password: '/api/client/account/password',
apikeys: '/api/client/account/api-keys'
},
servers:{
main: '/api/client',
get: s => `/api/client/servers/${s}`,
databases:{
main: s => `/api/client/servers/${s}/databases`,
get: (s, id) => `/api/client/servers/${s}/databases/${id}`,
rotate: (s, id) => `/api/client/servers/${s}/databases/${id}/rotate-password`
},
files:{
main: s => `/api/client/servers/${s}/files/list`,
contents: (s, f) => `/api/client/servers/${s}/files/contents?file=${f}`,
download: (s, f) => `/api/client/servers/${s}/files/download?file=${f}`,
rename: s => `/api/client/servers/${s}/files/rename`,
copy: s => `/api/client/servers/${s}/files/copy`,
write: (s, f) => `/api/client/servers/${s}/files/write?file=${f}`,
compress: s => `/api/client/servers/${s}/files/compress`,
decompress: s => `/api/client/servers/${s}/files/decompress`,
delete: s => `/api/client/servers/${s}/files/delete`,
create: s => `/api/client/servers/${s}/files/create-folder`,
upload: s => `/api/client/servers/${s}/files/upload`
},
schedules:{
main: s => `/api/client/servers/${s}/schedules`,
get: (s, id) => `/api/client/servers/${s}/schedules/${id}`,
tasks:{
main: (s, id) => `/api/client/servers/${s}/schedules/${id}/tasks`,
get: (s, id, t) => `/api/client/servers/${s}/schedules/${id}/tasks/${t}`
}
},
network:{
main: s => `/api/client/servers/${s}/network/allocations`,
get: (s, id) => `/api/client/servers/${s}/network/allocations/${id}`,
primary: (s, id) => `/api/client/servers/${s}/network/allocations/${id}/primary`
},
users:{
main: s => `/api/client/servers/${s}/users`,
get: (s, id) => `/api/client/servers/${s}/users/${id}`
},
backups:{
main: s => `/api/client/servers/${s}/backups`,
get: (s, id) => `/api/client/servers/${s}/backups/${id}`,
download: (s, id) => `/api/client/servers/${s}/backups/${id}/download`
},
startup:{
get: s => `/api/client/servers/${s}/startup`,
var: s => `/api/client/servers/${s}/startup/variable`
},
settings:{
rename: s => `/api/client/servers/${s}/settings/rename`,
reinstall: s => `/api/client/servers/${s}/settings/reinstall`
},
ws: s => `/api/client/servers/${s}/websocket`,
resources: s => `/api/client/servers/${s}/resources`,
command: s => `/api/client/servers/${s}/command`,
power: s => `/api/client/servers/${s}/power`
},
main: '/api/client',
permissions: '/api/client/permissions'
}
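Every entry in the map above is either a fixed path or a template function taking the server identifier (plus an ID where needed). A small excerpt shows the call pattern, using a hypothetical server identifier:

```javascript
// Excerpt of the endpoint map above; same shapes, fewer routes.
const endpoints = {
    servers: {
        get: s => `/api/client/servers/${s}`,
        schedules: {
            main: s => `/api/client/servers/${s}/schedules`,
            get: (s, id) => `/api/client/servers/${s}/schedules/${id}`
        }
    }
};

const serverId = 'abcd1234'; // hypothetical identifier
const listURL = endpoints.servers.schedules.main(serverId);
const oneURL = endpoints.servers.schedules.get(serverId, 12);
```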


@@ -1,172 +0,0 @@
const EventEmitter = require('events');
const WebSocket = require('ws');
const handle = require('./packetHandler');
const endpoints = require('../endpoints');
class Shard extends EventEmitter {
constructor(client, id) {
super();
this.client = client;
this.id = id;
this.token = null;
this.socket = null;
this.status = 'CLOSED';
this.readyAt = 0;
this.ping = -1;
this.lastPing = 0;
}
/**
* Emits a `debug` event with the given message.
* @param {string} message The message text.
*/
#debug(message) {
this.emit('debug', `[SHARD ${this.id}] ${message}`);
}
/**
* Initializes the connection and resolves after full authentication.
* @param {?WebSocketAuth} auth Authentication details (token and socket URL); fetched from the API when omitted.
* @returns {Promise<WebSocket>} WebSocket connection
*/
connect(auth) {
return new Promise(async (resolve, reject) => {
if (!['CLOSED', 'RECONNECTING'].includes(this.status))
return reject(new Error('Shard is already connecting or connected.'));
if (this.socket) this.socket = null;
if (!auth)
({ data: auth } = await this.client.requests.get(endpoints.servers.ws(this.id)));
this.socket = new WebSocket(auth.socket);
this.status = 'CONNECTING';
this.socket.on('open', () => this._onOpen());
this.socket.on('message', data => this._onMessage(data.toString()));
this.socket.on('error', error => this._onError(error));
this.socket.on('close', () => this._onClose());
this.token = auth.token;
this.once('authSuccess', () => {
this.emit('serverConnect', this.socket);
resolve(this.socket);
});
});
}
/**
* Close socket connection and start a new one
* @returns {Promise<WebSocket>} WebSocket connection
*/
async reconnect() {
if (this.status === 'RECONNECTING') return;
this.status = 'RECONNECTING';
this.socket.close(4009, 'pterojs::reconnect');
const { data } = await this.client.requests.get(endpoints.servers.ws(this.id));
return this.connect(data);
}
/**
* Close socket connection
*/
disconnect() {
return new Promise((resolve, reject) => {
if (!this.readyAt) return reject('Socket is not connected');
this.once('serverDisconnect', resolve);
this.socket.close(1000, 'pterojs::disconnect');
this.readyAt = 0;
this.lastPing = 0;
this.ping = -1;
});
}
/**
* Get a new token from API and send it to active socket connection
* @returns {Promise<void>}
*/
async refreshToken() {
return new Promise(async (resolve, reject) => {
if (this.status !== 'CONNECTED') return reject('Socket is not connected');
// using this transitional property to avoid double token issuing during init
if (!this.token) {
const { data } = await this.client.requests.get(endpoints.servers.ws(this.id));
this.token = data.token;
}
this.send('auth', this.token);
this.token = null;
this.lastPing = Date.now();
this.once('authSuccess', () => resolve(this.socket));
});
}
/**
* Send a message to socket server
* @param {string} event Name of event
* @param {any|any[]|undefined} args Event data
*/
send(event, args) {
if (!this.socket) throw new Error('Socket for this shard is unavailable.');
if (!Array.isArray(args)) args = [args];
this.#debug(`Sending event '${event}'`);
this.socket.send(JSON.stringify({ event, args }));
}
_onOpen() {
this.status = 'CONNECTED';
this.readyAt = Date.now();
this.refreshToken();
this.#debug('Connection opened');
}
_onMessage(data) {
if (!data) return this.#debug('Received a malformed packet');
data = JSON.parse(data);
this.emit('rawPayload', data);
switch (data.event) {
case 'auth success':
this.ping = Date.now() - this.lastPing;
this.emit('authSuccess');
break;
case 'token expiring':
this.refreshToken();
this.#debug('Auth token refreshed');
return;
case 'token expired':
this.reconnect();
break;
}
handle(this, data, this.id);
}
_onError(error) {
if (!error) return;
this.#debug(`Error received: ${error}`);
}
_onClose() {
this.status = 'CLOSED';
this.emit('serverDisconnect');
this.#debug('Connection closed');
}
}
/**
* @typedef {object} WebSocketAuth
* @property {string} token
* @property {string} socket
*/
module.exports = Shard;
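`send` above wraps every outgoing message in the same JSON envelope: an event name plus an args array, with single values wrapped. A standalone sketch of that encoding (`encodeShardMessage` is a hypothetical helper):

```javascript
// The envelope Shard#send writes to the socket: single args get
// wrapped in an array before serialization.
function encodeShardMessage(event, args) {
    if (!Array.isArray(args)) args = [args];
    return JSON.stringify({ event, args });
}

const authMsg = encodeShardMessage('auth', 'token123');
const statsMsg = encodeShardMessage('send stats', []);
// authMsg → '{"event":"auth","args":["token123"]}'
```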


@@ -1,64 +0,0 @@
const Shard = require('./Shard');
class WebSocketManager {
constructor(client) {
this.client = client;
/**
* A map of active server shards.
* @type {Map<string, Shard>}
*/
this.shards = new Map();
this.totalShards = 0;
this.readyAt = 0;
}
destroy() {
if (!this.readyAt) return;
for (const shard of this.shards.values()) shard.disconnect();
this.shards.clear();
this.readyAt = 0;
this.client.emit('debug', `[WS] Destroyed ${this.totalShards} shard(s)`);
this.totalShards = 0;
}
get ping() {
if (!this.totalShards) return -1;
let sum = 0;
for (const shard of this.shards.values()) sum += shard.ping;
return sum / this.totalShards;
}
/**
* Adds a server to be connected to websockets.
* @param {string} id The identifier of the server.
* @returns {Shard} Created (or reused) shard.
*/
createShard(id) {
if (this.shards.has(id))
return this.shards.get(id);
const shard = new Shard(this.client, id);
this.shards.set(id, shard);
this.totalShards++;
return shard;
}
/**
* Removes a server from websocket connections.
* @param {string} id The identifier of the server.
* @returns {boolean} Whether shard was removed.
*/
removeShard(id) {
if (!this.shards.has(id))
return false;
this.shards.delete(id);
this.totalShards--;
return true;
}
}
module.exports = WebSocketManager;
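The `ping` getter above averages per-shard pings and returns -1 when no shards exist. The same computation as a standalone function (`averagePing` is hypothetical):

```javascript
// Average ping across shards; -1 with no shards, matching the getter above.
function averagePing(shards) {
    if (!shards.length) return -1;
    return shards.reduce((sum, s) => sum + s.ping, 0) / shards.length;
}

averagePing([]);                           // -1
averagePing([{ ping: 40 }, { ping: 60 }]); // 50
```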


@@ -1,52 +0,0 @@
const caseConv = require('../../util/caseConv');
function handle(shard, { event, args }, id) {
if (!Array.isArray(args)) args = [args];
switch (event) {
case 'auth success':
return shard.emit('authSuccess');
case 'status':
return shard.emit('statusUpdate', ...args);
case 'console output':
return shard.emit('serverOutput', ...args);
case 'daemon message':
return shard.emit('daemonMessage', ...args);
case 'install started':
return shard.emit('installStart');
case 'install output':
return shard.emit('installOutput', ...args);
case 'install completed':
return shard.emit('installComplete');
case 'stats': {
const stats = JSON.parse(...args);
stats.network = caseConv.camelCase(stats.network);
return shard.emit('statsUpdate', caseConv.camelCase(stats));
}
case 'transferLogs':
case 'transferStatus':
return shard.emit('transferUpdate', ...args);
case 'backup completed':
return shard.emit('backupComplete', args ?? {});
case 'token expired':
return shard.emit('serverDisconnect');
case 'daemon error':
case 'jwt error':
return shard.emit('error', ...args);
default:
return shard.emit('debug', `[SHARD ${id}] Received unknown event: '${event}'`);
}
}
module.exports = handle;
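The `stats` branch above parses a JSON payload and converts its snake_case keys before re-emitting. A sketch of that key conversion, standing in for the library's `caseConv.camelCase` helper (the helper names here are hypothetical):

```javascript
// Recursive snake_case -> camelCase key conversion, mirroring what the
// stats branch relies on. toCamel/camelCase are hypothetical stand-ins.
const toCamel = k => k.replace(/_([a-z])/g, (_, c) => c.toUpperCase());
function camelCase(obj) {
    return Object.fromEntries(
        Object.entries(obj).map(([k, v]) => [
            toCamel(k),
            v && typeof v === 'object' && !Array.isArray(v) ? camelCase(v) : v
        ])
    );
}

const stats = camelCase(
    JSON.parse('{"memory_bytes":1024,"network":{"rx_bytes":10}}')
);
// stats.memoryBytes === 1024, stats.network.rxBytes === 10
```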


@@ -1,164 +0,0 @@
const { EventEmitter } = require('events');
const fetch = require('node-fetch');
const caseConv = require('../../util/caseConv');
class NodeStatus extends EventEmitter {
headers = {
'Content-Type': 'application/json',
'Accept': 'application/json',
'User-Agent': 'NodeStatus PteroJS v1.0.3'
}
#interval = null;
#connected = new Set();
/**
* @param {StatusOptions} options
*/
constructor(options) {
super();
Object.assign(this, options);
if (!/https?\:\/\/(?:localhost\:\d{4}|[\w\.\-]{3,256})/gi.test(this.domain))
throw new SyntaxError(
"Domain URL must start with 'http://' or 'https://' and "+
'must be bound to a port if using localhost.'
);
this.headers['Authorization'] = 'Bearer '+ options.auth;
this.nextInterval ||= 5;
this.retryLimit ||= 0;
/** @type {null | (id: number) => void} */
this.onConnect = null;
/** @type {null | (d: object) => void} */
this.onInterval = null;
/** @type {null | (id: number) => void} */
this.onDisconnect = null;
this.ping = -1;
this.current = 0;
this.readyAt = 0;
if (this.nodes.some(i => typeof i !== 'number'))
throw new TypeError('[NS] Node IDs must be numbers only.');
if (this.callInterval < 10_000 || this.callInterval > 43_200_000)
throw new RangeError('[NS] Call interval must be between 10 seconds and 12 hours.');
if (this.nextInterval >= this.callInterval)
throw new RangeError('[NS] Next interval must be less than the call interval.');
}
#debug(message) { this.emit('debug', '[NS] '+ message) }
async connect() {
if (this.readyAt) throw new Error('Process already running.');
this.#debug('Starting connection to API');
await this.#ping();
await this.#handleNext();
this.#interval = setInterval(() => this.#handleNext(), this.callInterval).unref();
this.readyAt = Date.now();
process.on('SIGINT', _ => this.close());
process.on('SIGTERM', _ => this.close());
}
async #ping() {
const start = Date.now();
const res = await fetch(`${this.domain}/api/application`, {
method: 'GET', headers: this.headers
});
if (res.status === 401)
return this.close(
'[NS:401] Invalid API credentials. Contact your panel administrator.',
true
);
if (res.status === 403) return this.close('[NS:403] Missing access.', true);
this.ping = Date.now() - start;
this.ping = Date.now() - start;
const data = await res.json().catch(() => {});
// Hitting /api/application directly yields a JSON error body when the
// application API is reachable; no error body means it is unavailable.
if (data?.errors?.length) return;
return this.close('[NS:404] Application API is unavailable.', true);
}
async #handleNext() {
for (let i=0; i<this.nodes.length; i++) {
await this.#request(this.nodes[i]);
if (this.nodes[i+1]) {
await new Promise(res => setTimeout(res, this.nextInterval).unref());
}
}
}
async #request(id) {
this.#debug(`Fetching: /api/application/nodes/${id}`);
const res = await fetch(
`${this.domain}/api/application/nodes/${id}`, {
method: 'GET', headers: this.headers
});
if (!res.ok) {
if (res.status === 401)
return this.close(
'[NS:401] Invalid API credentials. Contact your panel administrator.',
true
);
if (res.status === 403) return this.close('[NS:403] Missing access.', true);
if (res.status === 404) {
if (this.#connected.has(id)) {
this.emit('disconnect', id);
if (this.onDisconnect) this.onDisconnect(id);
this.#connected.delete(id);
}
return;
}
if (this.current > this.retryLimit)
return this.close('[NS] Maximum retry limit exceeded.');
this.current++;
this.#debug('Attempting retry fetch');
this.#request(id);
return;
}
let { attributes } = await res.json();
attributes = caseConv.camelCase(attributes);
if (!this.#connected.has(id)) {
this.#connected.add(id);
this.emit('connect', id);
if (this.onConnect !== null) this.onConnect(id);
}
this.emit('interval', attributes);
if (this.onInterval !== null) this.onInterval(attributes);
}
close(message = 'None', error = false) {
this.#debug('Closing connection');
if (this.#interval) clearInterval(this.#interval);
this.removeAllListeners();
this.#connected.clear();
this.readyAt = 0;
if (error && message) throw new Error(message);
}
}
module.exports = NodeStatus;
/**
* @typedef {object} StatusOptions
* @property {string} domain The domain for the API.
* @property {string} auth The API key authorization.
* @property {number[]} nodes An array of node IDs to listen for.
* @property {number} callInterval The interval to wait between API calls (between 10 seconds and 12 hours).
* @property {?number} nextInterval The interval to wait between processing checks. Must be less than the callInterval.
* @property {?number} retryLimit The amount of times to retry fetching the API.
*/
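The constructor guards above reduce to three checks on the options object. As a standalone sketch (`validateStatusOptions` is a hypothetical helper):

```javascript
// The option validation NodeStatus performs at construction time.
// validateStatusOptions is a hypothetical standalone helper.
function validateStatusOptions({ nodes, callInterval, nextInterval = 5 }) {
    if (nodes.some(i => typeof i !== 'number'))
        throw new TypeError('Node IDs must be numbers only.');
    if (callInterval < 10_000 || callInterval > 43_200_000)
        throw new RangeError('Call interval must be between 10 seconds and 12 hours.');
    if (nextInterval >= callInterval)
        throw new RangeError('Next interval must be less than the call interval.');
    return true;
}

validateStatusOptions({ nodes: [1, 2], callInterval: 60_000, nextInterval: 5_000 });
```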


@@ -1,982 +0,0 @@
// Typedefs for PteroJS classes, structures and extensions
import { EventEmitter } from 'events';
import WebSocket from 'ws';
export interface FetchOptions {
force?: boolean;
}
export type Include<B> = { include?: string[] } & B;
export interface OptionSpec {
fetch: boolean;
cache: boolean;
max: number;
}
// Application API
export interface ApplicationServerCreateOptions {
name: string;
egg: string;
image: string;
startup: string;
env: { [key: string]: any };
allocation: number;
limits?: { [key: string]: any };
featureLimits?: { [key: string]: any };
}
export class ApplicationServerManager {
static get FILTERS(): Readonly<string[]>;
static get INCLUDES(): Readonly<string[]>;
static get SORTS(): Readonly<string[]>;
constructor(client: PteroApp);
client: PteroApp;
get defaultLimits(): { [key: string]: number };
get defaultFeatureLimits(): { [key: string]: number };
_patch(data: any): ApplicationServer | Dict<number, ApplicationServer>;
resolve(obj: string | number | object | ApplicationServer): ApplicationServer | undefined;
panelURLFor(server: string | ApplicationServer): string;
adminURLFor(server: number | ApplicationServer): string;
fetch(id?: number, options?: Include<FetchOptions>): Promise<ApplicationServer | Dict<number, ApplicationServer>>;
query(entity: string, filter?: string, sort?: string): Promise<Dict<number, ApplicationServer>>;
create(user: number | PteroUser, options: ApplicationServerCreateOptions): Promise<ApplicationServer>;
delete(server: number | ApplicationServer, force?: boolean): Promise<boolean>;
}
export class NestEggsManager {
static get INCLUDES(): Readonly<string[]>;
constructor(client: PteroApp);
client: PteroApp;
cache: Dict<number, object>;
adminURLFor(id: number): string;
fetch(nest: number, id?: number, options?: Include<FetchOptions>): Promise<Dict<number, object>>;
for(nest: number): object[];
}
export interface Nest {
id: number;
uuid: string;
author: string;
name: string;
description: string;
createdAt: Date;
updatedAt: Date | null;
}
export class NestManager {
static get INCLUDES(): Readonly<string[]>;
constructor(client: PteroApp);
client: PteroApp;
cache: Set<Nest>;
eggs: NestEggsManager;
_patch(data: any): Set<Nest>;
adminURLFor(id: number): string;
fetch(id: number, include: Include<{}>): Promise<Set<Nest>>;
}
export interface NodeAllocation {
id: number;
ip: string;
alias: string | null;
port: number;
notes: string | null;
assigned: boolean;
}
export class NodeAllocationManager {
static get INCLUDES(): Readonly<string[]>;
constructor(client: PteroApp);
client: PteroApp;
cache: Dict<number, NodeAllocation>;
_patch(node: number, data: any): Dict<number, NodeAllocation>;
adminURLFor(id: number): string;
fetch(node: number, options: Include<FetchOptions>): Promise<Dict<number, NodeAllocation>>;
fetchAvailable(node: number, single?: boolean): Promise<Dict<number, NodeAllocation> | NodeAllocation | void>;
create(node: number, ip: string, ports: string[]): Promise<void>;
delete(node: number, id: number): Promise<boolean>;
}
export interface NodeLocation {
id: number;
long: string;
short: string;
createdAt: Date;
updatedAt: Date | null;
}
export class NodeLocationManager {
static get FILTERS(): Readonly<string[]>;
static get INCLUDES(): Readonly<string[]>;
constructor(client: PteroApp);
client: PteroApp;
cache: Dict<number, NodeLocation>;
_patch(data: any): NodeLocation | Dict<number, NodeLocation>;
resolve(obj: string | number | object): NodeLocation | undefined;
adminURLFor(id: number): string;
fetch(id?: number, options?: Include<FetchOptions>): Promise<NodeLocation | Dict<number, NodeLocation>>;
query(entity: string, filter?: string, sort?: string): Promise<Dict<number, NodeLocation>>;
create(short: string, long: string): Promise<NodeLocation>;
update(id: number, options:{ short?: string; long?: string }): Promise<NodeLocation>;
delete(id: number): Promise<boolean>;
}
export interface NodeCreateOptions {
name: string;
location: number;
fqdn: string;
scheme: string;
memory: number;
disk: number;
sftp:{
port: number;
listener: number;
};
upload_size?: number;
memory_overallocate?: number;
disk_overallocate?: number;
}
export class NodeManager {
static get FILTERS(): Readonly<string[]>;
static get INCLUDES(): Readonly<string[]>;
static get SORTS(): Readonly<string[]>;
constructor(client: PteroApp);
client: PteroApp;
cache: Dict<number, Node>;
_patch(data: any): Node | Dict<number, Node>;
resolve(obj: string | number | object | Node): Node | undefined;
adminURLFor(node: number | Node): string;
fetch(id: number, options?: Include<FetchOptions>): Promise<Node | Dict<number, Node>>;
query(entity: string, filter?: string, sort?: string): Promise<Dict<number, Node>>;
create(options: NodeCreateOptions): Promise<Node>;
update(node: number | Node, options: Partial<NodeCreateOptions>): Promise<Node>;
delete(node: number | Node): Promise<boolean>;
}
export interface ApplicationOptions {
users?: OptionSpec;
nodes?: OptionSpec;
nests?: OptionSpec;
servers?: OptionSpec;
locations?: OptionSpec;
}
export class PteroApp {
constructor(domain: string, auth: string, options?: ApplicationOptions);
domain: string;
auth: string;
options: ApplicationOptions;
users: UserManager;
nodes: NodeManager;
nests: NestManager;
servers: ApplicationServerManager;
locations: NodeLocationManager;
allocations: NodeAllocationManager;
requests: RequestManager;
get ping(): number;
connect(): Promise<boolean>;
}
export interface UserCreateOptions {
email: string;
username: string;
firstname: string;
lastname: string;
password?: string;
}
export class UserManager {
static get FILTERS(): Readonly<string[]>;
static get SORTS(): Readonly<string[]>;
constructor(client: PteroApp);
client: PteroApp;
cache: Dict<number, PteroUser>;
_patch(data: any): PteroUser | Dict<number, PteroUser>;
resolve(obj: string | number | object | PteroUser): PteroUser | undefined;
adminURLFor(user: number | PteroUser): string;
fetch(id?: number, options?: { withServers?: boolean } & FetchOptions): Promise<PteroUser | Dict<number, PteroUser>>;
fetchExternal(id: number, options?: { withServers?: boolean } & FetchOptions): Promise<PteroUser>;
query(entity: string, filter?: string, sort?: string): Promise<Dict<number, PteroUser>>;
create(email: string, username: string, firstname: string, lastname: string): Promise<PteroUser>;
update(user: number | PteroUser, options: Partial<UserCreateOptions>): Promise<PteroUser>;
delete(user: number | PteroUser): Promise<boolean>;
}
// Client API - Websockets
export interface WebSocketAuth {
token: string;
socket: string;
}
export type WebSocketStatus =
| 'CLOSED'
| 'CONNECTING'
| 'RECONNECTING'
| 'CONNECTED';
export interface ShardCommands {
'auth': [token: string]
'send stats': []
'send logs': []
'set state': [state: PowerState]
'send command': [command: string]
}
export interface ServerStats {
cpuAbsolute: number;
diskBytes: number;
memoryBytes: number;
memoryLimitBytes: number;
network: {
rxBytes: number;
txBytes: number;
};
state: string;
uptime: number;
}
export interface ShardEvents {
debug: [message: string];
error: [id: string, error: any];
tokenRefresh: [];
authSuccess: [];
serverConnect: [socket: WebSocket];
serverOutput: [output: string];
serverDisconnect: [];
statusUpdate: [status: string];
statsUpdate: [stats: ServerStats];
transferUpdate: [data: any];
installStart: [];
installOutput: [output: string];
installComplete: [];
backupComplete: [backup: Partial<Backup>];
daemonMessage: [message: any];
}
export class Shard extends EventEmitter {
constructor(client: PteroClient, id: string, auth: WebSocketAuth);
client: PteroClient;
id: string;
token: string | null;
socket: WebSocket | null;
status: WebSocketStatus;
readyAt: number;
ping: number;
lastPing: number;
#debug(message: string): void;
connect(auth?: WebSocketAuth): Promise<WebSocket>;
reconnect(): Promise<WebSocket>;
refreshToken(): Promise<void>;
disconnect(): Promise<void>;
send<K extends keyof ShardCommands>(event: K, args: ShardCommands[K]): void;
_onOpen(): void;
_onMessage({ data }:{ data: string }): void;
_onError({ error }: any): void;
_onClose(): void;
emit<E extends keyof ShardEvents>(event: E, ...args: ShardEvents[E]): boolean;
on<E extends keyof ShardEvents>(event: E, listener: (...args: ShardEvents[E]) => any): this;
on<T, E extends keyof ShardEvents>(event: E, listener: (...args: ShardEvents[E]) => T): this;
once<E extends keyof ShardEvents>(event: E, listener: (...args: ShardEvents[E]) => any): this;
once<T, E extends keyof ShardEvents>(event: E, listener: (...args: ShardEvents[E]) => T): this;
off<E extends keyof ShardEvents>(event: E, listener: (...args: ShardEvents[E]) => any): this;
off<T, E extends keyof ShardEvents>(event: E, listener: (...args: ShardEvents[E]) => T): this;
}
export class WebSocketManager {
constructor(client: PteroClient);
client: PteroClient;
servers: string[];
shards: Map<string, Shard>;
totalShards: number;
readyAt: number;
ping: number;
destroy(): void;
createShard(id: string): Shard;
removeShard(id: string): boolean;
}
// Client API - Main
export interface Backup {
uuid: string;
name: string;
ignoredFiles: string[];
hash: string | null;
bytes: number;
createdAt: Date;
completedAt: Date | null;
}
export class BackupManager {
constructor(client: PteroClient, server: ClientServer);
client: PteroClient;
server: ClientServer;
cache: Dict<string, Backup>;
_patch(data: any): Backup | Dict<string, Backup>;
fetch(id?: string, force?: boolean): Promise<Backup | Dict<string, Backup>>;
create(): Promise<Backup>;
download(id: string): Promise<string>;
delete(id: string): Promise<boolean>;
}
export interface ClientDatabase {
id: string;
host:{
address: string;
port: number;
};
name: string;
username: string;
password: string | null;
connections: string;
maxConnections: number;
}
export class ClientDatabaseManager {
constructor(client: PteroClient, server: ClientServer);
client: PteroClient;
server: ClientServer;
cache: Dict<string, ClientDatabase>;
_patch(data: any): Dict<string, ClientDatabase>;
get panelURL(): string;
fetch(withPass?: boolean): Promise<Dict<string, ClientDatabase>>;
create(database: string, remote: string): Promise<ClientDatabase>;
rotate(id: string): Promise<ClientDatabase>;
delete(id: string): Promise<boolean>;
}
export interface PageData {
current: number;
total: number;
count: number;
perPage: number;
totalPages: number;
links: object;
}
export class ClientServerManager {
static get INCLUDES(): Readonly<string[]>;
constructor(client: PteroClient);
client: PteroClient;
cache: Dict<string, ClientServer>;
pageData: PageData;
_patch(data: any): ClientServer | Dict<string, ClientServer>;
_resolveMeta(data: any): void;
panelURLFor(server: string | ClientServer): string;
fetch(id: string, options?: Include<FetchOptions>): Promise<ClientServer | Dict<string, ClientServer>>;
}
export interface PteroFile {
name: string;
mode: string;
modeBits: bigint;
size: number;
isFile: boolean;
isSymlink: boolean;
isEditable: boolean | undefined;
mimetype: string;
createdAt: Date;
modifiedAt: Date | null;
}
export class FileManager {
constructor(client: PteroClient, server: ClientServer);
client: PteroClient;
server: ClientServer;
cache: Dict<string, Dict<string, PteroFile>>;
_patch(data: any): Dict<string, PteroFile>;
get panelURL(): string;
fetch(dir: string): Promise<Dict<string, PteroFile>>;
getContents(filePath: string): Promise<string>;
download(filePath: string): Promise<string>;
rename(filePath: string, name: string): Promise<void>;
copy(filePath: string): Promise<void>;
write(filePath: string, content: string | Buffer): Promise<void>;
compress(dir: string, files: string[]): Promise<PteroFile>;
decompress(dir: string, file: string): Promise<void>;
delete(dir: string, files: string[]): Promise<void>;
createFolder(dir: string, name: string): Promise<void>;
getUploadURL(): Promise<string>;
}
export interface NetworkAllocation {
id: number;
ip: string;
ipAlias: string | null;
port: number;
notes: string | null;
isDefault: boolean;
}
export class NetworkAllocationManager {
constructor(client: PteroClient, server: ClientServer);
client: PteroClient;
server: ClientServer;
cache: Dict<number, NetworkAllocation>;
_patch(data: any): Dict<number, NetworkAllocation>;
fetch(): Promise<Dict<number, NetworkAllocation>>;
assign(): Promise<NetworkAllocation>;
setNote(id: number, notes: string): Promise<NetworkAllocation>;
setPrimary(id: number): Promise<NetworkAllocation>;
unassign(id: number): Promise<true>;
}
export interface ClientOptions {
ws?: boolean;
fetchClient?: boolean;
servers?: OptionSpec;
subUsers?: OptionSpec;
disableEvents?: string[];
}
export interface ClientEvents {
debug: [message: string];
error: [id: string, error: any];
ready: [];
}
export class PteroClient extends EventEmitter {
constructor(domain: string, auth: string, options?: ClientOptions);
domain: string;
auth: string;
options: ClientOptions;
user: ClientUser | null;
servers: ClientServerManager;
schedules: ScheduleManager;
requests: RequestManager;
ws: WebSocketManager;
ping: number;
connect(): Promise<boolean>;
fetchClient(): Promise<ClientUser>;
addSocketServer<T extends string | string[]>(ids: T): T extends string[] ? Shard[] : Shard;
removeSocketServer(id: string): boolean;
disconnect(): void;
emit<E extends keyof ClientEvents>(event: E, ...args: ClientEvents[E]): boolean;
on<E extends keyof ClientEvents>(event: E, listener: (...args: ClientEvents[E]) => any): this;
on<T, E extends keyof ClientEvents>(event: E, listener: (...args: ClientEvents[E]) => T): this;
once<E extends keyof ClientEvents>(event: E, listener: (...args: ClientEvents[E]) => any): this;
once<T, E extends keyof ClientEvents>(event: E, listener: (...args: ClientEvents[E]) => T): this;
off<E extends keyof ClientEvents>(event: E, listener: (...args: ClientEvents[E]) => any): this;
off<T, E extends keyof ClientEvents>(event: E, listener: (...args: ClientEvents[E]) => T): this;
}
export interface ScheduleCreateOptions {
name: string;
active: boolean;
minute: string;
hour: string;
dayOfWeek?: string;
dayOfMonth?: string;
}
export class ScheduleManager {
constructor(client: PteroClient);
client: PteroClient;
cache: Dict<number, Schedule>;
_patch(id: number, data: any): Schedule | Dict<number, Schedule>;
panelURLFor(id: string, schedule: string | Schedule): string;
fetch(server: string, id?: string, force?: boolean): Promise<Schedule | Dict<number, Schedule>>;
create(server: string, options: ScheduleCreateOptions): Promise<Schedule>;
update(server: string, id: string, options: Partial<ScheduleCreateOptions>): Promise<Schedule>;
delete(server: string, id: string): Promise<boolean>;
}
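`ScheduleCreateOptions` mixes required fields (`name`, `active`, `minute`, `hour`) with optional cron fields. A sketch of client-side validation before calling `create()`; the helper name and the `'*'` defaults are illustrative, not part of the library:

```javascript
// Hypothetical helper: fills cron defaults and checks required fields.
function buildScheduleOptions(options) {
    const { name, active, minute, hour, dayOfWeek = '*', dayOfMonth = '*' } = options;
    if (typeof name !== 'string' || !name.length)
        throw new TypeError('Schedule name is required.');
    if (typeof active !== 'boolean')
        throw new TypeError('Schedule active state is required.');
    for (const [k, v] of Object.entries({ minute, hour, dayOfWeek, dayOfMonth }))
        if (typeof v !== 'string') throw new TypeError(`${k} must be a cron string.`);
    return { name, active, minute, hour, dayOfWeek, dayOfMonth };
}
```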
export class SubUserManager {
constructor(client: PteroClient, server: ClientServer);
client: PteroClient;
server: ClientServer;
cache: Dict<string, PteroSubUser>;
_patch(data: any): PteroSubUser | Dict<string, PteroSubUser>;
resolve(obj: string | number | object | PteroSubUser): PteroSubUser | undefined;
get panelURL(): string;
fetch(id?: string, force?: boolean): Promise<PteroSubUser | Dict<string, PteroSubUser>>;
add(email: string, permissions: PermissionResolvable): Promise<PteroSubUser>;
setPermissions(uuid: string, permissions: PermissionResolvable): Promise<PteroSubUser>;
remove(id: string): Promise<boolean>;
}
// Extensions
export interface StatusOptions {
domain: string;
auth: string;
nodes: number[];
callInterval: number;
nextInterval?: number;
retryLimit?: number;
}
export interface StatusEvents {
debug: [message: string];
connect: [id: number];
interval: [node: object];
disconnect: [id: number];
}
export class NodeStatus extends EventEmitter implements StatusOptions {
constructor(options: StatusOptions);
options: StatusOptions;
headers: { [key: string]: string };
#interval: NodeJS.Timer | null;
#connected: Set<number>;
domain: string;
auth: string;
nodes: number[];
callInterval: number;
nextInterval: number;
retryLimit: number;
ping: number;
current: number;
readyAt: number;
onConnect: (id: number) => void;
onInterval: (d: object) => void;
onDisconnect: (id: number) => void;
#debug(message: string): void;
connect(): Promise<void>;
#ping(): Promise<void>;
#handleNext(): Promise<void>;
#request(id: number): Promise<void>;
close(message?: string, error?: boolean): void;
emit<E extends keyof StatusEvents>(event: E, ...args: StatusEvents[E]): boolean;
on<E extends keyof StatusEvents>(event: E, listener: (...args: StatusEvents[E]) => any): this;
on<T, E extends keyof StatusEvents>(event: E, listener: (...args: StatusEvents[E]) => T): this;
once<E extends keyof StatusEvents>(event: E, listener: (...args: StatusEvents[E]) => any): this;
once<T, E extends keyof StatusEvents>(events: E, listener: (...args: StatusEvents[E]) => T): this;
off<E extends keyof StatusEvents>(event: E, listener: (...args: StatusEvents[E]) => any): this;
off<T, E extends keyof StatusEvents>(event: E, listener: (...args: StatusEvents[E]) => T): this;
}
// HTTP
export class RequestManager extends EventEmitter {
constructor(type: string, domain: string, auth: string);
type: string;
domain: string;
auth: string;
ping: number;
getHeaders(): { [key: string]: string };
#debug(message: string): void;
_make(path: string, params: object, method?: string): Promise<object | Buffer | void>;
get(path: string): Promise<object | Buffer | void>;
post(path: string, params: object): Promise<object | Buffer | void>;
patch(path: string, params: object): Promise<object | Buffer | void>;
put(path: string, params: object): Promise<object | Buffer | void>;
delete(path: string, params?: object): Promise<object | Buffer | void>;
}
// Structures
export interface UpdateDetailsOptions {
name?: string;
owner?: number | PteroUser;
externalId?: string;
description?: string;
}
export interface UpdateBuildOptions {
allocation?: number;
swap?: number;
memory?: number;
disk?: number;
cpu?: number;
threads?: number | null;
io?: number;
featureLimits?: {
allocations?: number;
backups?: number;
databases?: number;
};
}
export class ApplicationServer {
constructor(client: PteroApp, data: object);
client: PteroApp;
id: number;
uuid: string;
identifier: string;
externalId: string | null;
name: string;
description: string | null;
suspended: boolean;
limits: object;
featureLimits: object;
ownerId: number;
owner: PteroUser | null;
nodeId: number;
node: Node | null;
allocation: number;
nest: number;
egg: number;
container: null;
createdAt: Date;
createdTimestamp: number;
updatedAt: Date | null;
updatedTimestamp: number | null;
_patch(data: any): void;
get panelURL(): string;
get adminURL(): string;
fetchOwner(): Promise<PteroUser>;
updateDetails(options: UpdateDetailsOptions): Promise<this>;
updateBuild(options: UpdateBuildOptions): Promise<this>;
updateStartup(options: object): Promise<void>;
suspend(): Promise<void>;
unsuspend(): Promise<void>;
reinstall(): Promise<void>;
delete(force?: boolean): Promise<boolean>;
toJSON(): object;
}
export type PowerState =
| 'start'
| 'stop'
| 'restart'
| 'kill';
export class ClientServer {
constructor(client: PteroClient, data: any);
client: PteroClient;
uuid: string;
identifier: string;
isOwner: boolean;
name: string;
node: number;
sftp: {
ip: string;
port: number;
};
description: string | null;
limits: object;
featureLimits: object;
suspended: boolean;
state: string;
installing: boolean;
users: SubUserManager;
allocations: NetworkAllocationManager;
permissions: Permissions;
databases: ClientServerManager;
files: FileManager;
schedules: Dict<string, Schedule>;
_patch(data: any): void;
get panelURL(): string;
addWebSocket(): void;
fetchResources(): Promise<void>;
sendCommand(command: string): Promise<void>;
setPowerState(state: PowerState): Promise<void>;
}
export interface DictConstructor {
new(): Dict<any, any>;
new<K, V>(entries?: readonly [K, V][]): Dict<K, V>;
new<K, V>(iterable?: Iterable<readonly [K, V]>): Dict<K, V>;
readonly [Symbol.iterator]: DictConstructor;
readonly [Symbol.species]: DictConstructor;
}
export class Dict<K, V> extends Map<K, V> {
['constructor']: DictConstructor;
has(key: K): boolean;
get(key: K): V | undefined;
set(key: K, value: V): this;
delete(key: K): boolean;
some(fn: (value: V, key: K, dict: this) => boolean): boolean;
every(fn: (value: V, key: K, dict: this) => boolean): boolean;
hasAny(...keys: K[]): boolean;
hasAll(...keys: K[]): boolean;
first(amount?: number): V | V[] | undefined;
last(amount?: number): V | V[] | undefined;
random(amount?: number): V | V[] | undefined;
map<T>(fn: (value: V, key: K, dict: this) => T): T[];
filter(fn: (value: V, key: K, dict: this) => boolean): Dict<K, V>;
filter<k extends K>(fn: (value: V, key: K, dict: this) => key is k): Dict<k, V>;
filter<v extends V>(fn: (value: V, key: K, dict: this) => value is v): Dict<K, v>;
find(fn: (value: V, key: K, dict: this) => boolean): V | undefined;
find<k extends K>(fn: (value: V, key: K, dict: this) => key is k): V | undefined;
find<v extends V>(fn: (value: V, key: K, dict: this) => value is v): V | undefined;
sweep(fn: (value: V, key: K, dict: this) => boolean): number;
part(fn: (value: V, key: K, dict: this) => boolean): Dict<K, V>[];
part<k extends K>(fn: (value: V, key: K, dict: this) => key is k): Dict<k, V>[];
part<v extends V>(fn: (value: V, key: K, dict: this) => value is v): Dict<K, v>[];
reduce<T>(fn: (acc: T, value: V, key: K, dict: this) => T, acc: T): T;
join(...dicts: Dict<K, V>[]): Dict<K, V>;
difference(dict: Dict<K, V>): Dict<K, V>;
}
export class RequestError extends Error {
constructor(message: string);
}
export class PteroAPIError extends Error {
constructor(data: any);
code: string;
}
export class WebSocketError extends Error {
constructor(message: string);
}
export interface NodeUpdateOptions {
name?: string;
location?: string;
fqdn?: string;
scheme?: string;
memory?: number;
disk?: number;
sftp?: {
port?: number;
listener?: number;
};
upload_size?: number;
memory_overallocate?: number;
disk_overallocate?: number;
}
export class Node {
constructor(client: PteroApp, data: any);
client: PteroApp;
id: number;
uuid: string;
public: boolean;
name: string;
description: string | null;
locationId: number;
location: NodeLocation | undefined;
servers: Dict<number, ApplicationServer> | undefined;
fqdn: string;
scheme: string;
behindProxy: boolean;
maintenance: boolean;
memory: number;
overallocatedMemory: number;
disk: number;
overallocatedDisk: number;
uploadSize: number;
daemon: {
listening: number;
sftp: number;
base: string;
};
createdAt: Date;
updatedAt: Date | null;
_patch(data: any): void;
get adminURL(): string;
getConfig(): Promise<object>;
update(options: NodeUpdateOptions): Promise<Node>;
delete(): Promise<boolean>;
toJSON(): object;
}
export type PermissionFlags = { [key: string]: number };
export type PermissionResolvable =
| string[]
| number[]
| object;
export class Permissions {
static get FLAGS(): Readonly<PermissionFlags>;
static get DEFAULT(): Readonly<PermissionFlags>;
constructor(data: PermissionResolvable);
raw: PermissionFlags;
has(perms: string | number | PermissionResolvable): boolean;
isAdmin(): boolean;
static resolve(perms: PermissionResolvable): PermissionFlags;
serialize(): { [key: string]: boolean };
toArray(): string[];
toStrings(): string[];
static fromStrings(perms: string[]): PermissionFlags;
}
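`Permissions.resolve()` accepts permission names, numeric values, or an object and normalizes them into a `PermissionFlags` map. A standalone sketch of that normalization; the flag names and values below are invented for illustration, the real `Permissions.FLAGS` map ships with the library:

```javascript
// Illustrative flag map; the library's actual FLAGS object differs.
const FLAGS = Object.freeze({
    'control.console': 0,
    'control.start': 1,
    'control.stop': 2,
});

// Sketch of Permissions.resolve: accepts an array of names/values or an object.
function resolve(perms) {
    const entries = Array.isArray(perms) ? perms : Object.keys(perms);
    const res = {};
    for (const p of entries) {
        // Numeric values are mapped back to their flag name.
        const key = typeof p === 'number'
            ? Object.keys(FLAGS).find(k => FLAGS[k] === p)
            : p;
        if (key === undefined || !(key in FLAGS))
            throw new Error(`Unknown permission: ${p}`);
        res[key] = FLAGS[key];
    }
    return res;
}
```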
export type ScheduleAction =
| 'command'
| 'power'
| 'backup';
export interface ScheduleTask {
id: number;
sequenceId: number;
action: ScheduleAction;
payload: string;
offset: number;
queued: boolean;
createdAt: Date;
updatedAt: Date | null;
}
export interface ScheduleUpdateOptions {
name?: string;
active?: boolean;
minute?: string;
hour?: string;
dayOfWeek?: string;
dayOfMonth?: string;
}
export class Schedule {
constructor(client: PteroClient, serverId: string, data: any);
client: PteroClient;
serverId: string;
tasks: Dict<number, ScheduleTask>;
id: number;
name: string;
cron: {
week: string;
month: string;
hour: string;
minute: string;
};
active: boolean;
processing: boolean;
lastRunAt: Date | null;
nextRunAt: Date;
createdAt: Date;
updatedAt: Date | null;
_patch(data: any): void;
_resolveTask(data: any): ScheduleTask;
get panelURL(): string;
update(options: ScheduleUpdateOptions): Promise<Schedule>;
createTask(action: string, payload: string, offset: string): Promise<ScheduleTask>;
updateTask(id: number, options: { action: string; payload: string; offset: string }): Promise<ScheduleTask>;
deleteTask(id: number): Promise<boolean>;
delete(): Promise<boolean>;
}
export class BaseUser {
constructor(client: PteroApp | PteroClient, data: any);
client: PteroApp | PteroClient;
id?: number;
username?: string;
email?: string;
firstname?: string;
lastname?: string;
language?: string;
toString(): string;
toJSON(): object;
}
export class PteroUser extends BaseUser {
constructor(client: PteroApp, data: any);
client: PteroApp;
id: number;
uuid: string;
externalId: string;
username: string;
email: string;
firstname: string;
lastname: string;
language: string;
isAdmin: boolean;
tfa: boolean;
twoFactor: boolean;
relationships: Dict<number, ApplicationServer> | undefined;
createdAt: Date;
createdTimestamp: number;
updatedAt: Date | null;
updatedTimestamp: number | null;
get adminURL(): string;
update(options: Partial<UserCreateOptions>): Promise<PteroUser>;
delete(): Promise<boolean>;
}
export class PteroSubUser extends BaseUser {
constructor(client: PteroClient, data: any);
client: PteroClient;
uuid: string;
identifier: string;
_server: string;
image: string;
enabled: boolean;
permissions: Permissions;
createdAt: Date;
createdTimestamp: number;
get panelURL(): string;
setPermissions(perms: PermissionResolvable): Promise<this>;
}
export interface APIKey {
identifier: string;
description: string;
allowedIPs: string[];
lastUsedAt: Date | null;
createdAt: Date;
}
export class ClientUser extends BaseUser {
constructor(client: PteroClient, data: any);
client: PteroClient;
uuid: string;
identifier: string;
image: string;
enabled: boolean;
isAdmin: boolean;
tokens: string[];
apikeys: APIKey[];
get panelURL(): string;
get2faCode(): Promise<string>;
enable2fa(): Promise<string>;
disable2fa(password: string): Promise<void>;
updateEmail(email: string, password: string): Promise<this>;
updatePassword(oldpass: string, newpass: string): Promise<void>;
fetchKeys(): Promise<APIKey[]>;
createKey(description: string, allowed?: string[]): Promise<APIKey>;
deleteKey(id: string): Promise<void>;
}


@@ -1,47 +0,0 @@
module.exports = {
version: require('../package.json').version,
// Application API
ApplicationServerManager: require('./application/ApplicationServerManager'),
NestEggsManager: require('./application/NestEggsManager'),
NestManager: require('./application/NestManager'),
NodeAllocationManager: require('./application/NodeAllocationManager'),
NodeLocationManager: require('./application/NodeLocationManager'),
NodeManager: require('./application/NodeManager'),
PteroApp: require('./application/PteroApp'),
UserManager: require('./application/UserManager'),
// Client API
Shard: require('./client/ws/Shard'),
WebSocketManager: require('./client/ws/WebSocketManager'),
BackupManager: require('./client/BackupManager'),
ClientDatabaseManager: require('./client/ClientDatabaseManager'),
ClientServerManager: require('./client/ClientServerManager'),
FileManager: require('./client/FileManager'),
NetworkAllocationManager: require('./client/NetworkAllocationManager'),
PteroClient: require('./client/PteroClient'),
ScheduleManager: require('./client/ScheduleManager'),
SubUserManager: require('./client/SubUserManager'),
// Extensions
NodeStatus: require('./extensions/NodeStatus'),
// HTTP
RequestManager: require('./http/RequestManager'),
// Structures
ApplicationServer: require('./structures/ApplicationServer'),
ClientServer: require('./structures/ClientServer'),
Dict: require('./structures/Dict'),
...require('./structures/Errors'),
Node: require('./structures/Node'),
Permissions: require('./structures/Permissions'),
Schedule: require('./structures/Schedule'),
...require('./structures/User'),
// Utility
caseConv: require('./util/caseConv'),
configLoader: require('./util/configLoader'),
query: require('./util/query')
};


@@ -1,319 +0,0 @@
const { PteroUser } = require('./User');
const Node = require('./Node');
const caseConv = require('../util/caseConv');
const endpoints = require('../application/endpoints');
class ApplicationServer {
constructor(client, data) {
this.client = client;
/**
* The ID of the server (separate from the UUID).
* @type {number}
*/
this.id = data.id;
/**
* The internal UUID of the server.
* @type {string}
*/
this.uuid = data.uuid;
/**
* A substring of the server's UUID to easily identify it.
* @type {string}
*/
this.identifier = data.identifier;
/**
* The date the server was created.
* @type {Date}
*/
this.createdAt = new Date(data.created_at);
/** @type {number} */
this.createdTimestamp = this.createdAt.getTime();
/**
* The date the server was last updated.
* @type {?Date}
*/
this.updatedAt = data.updated_at ? new Date(data.updated_at) : null;
/** @type {?number} */
this.updatedTimestamp = this.updatedAt?.getTime() || null;
this._patch(data);
}
_patch(data) {
if ('external_id' in data) {
/**
* The external ID of the server (if set).
* @type {?string}
*/
this.externalId = data.external_id ?? null;
}
if ('name' in data) {
/**
* The name of the server.
* @type {string}
*/
this.name = data.name;
}
if ('description' in data) {
/**
* A brief description of the server (if set).
* @type {?string}
*/
this.description = data.description || null;
}
if ('suspended' in data) {
/**
* Whether the server is suspended from action.
* @type {boolean}
*/
this.suspended = data.suspended;
}
if ('limits' in data) {
/**
* An object containing the server's limits.
* @type {object}
*/
this.limits = data.limits;
}
if ('feature_limits' in data) {
/**
* An object containing the server's feature limits.
* @type {object}
*/
this.featureLimits = data.feature_limits;
}
if ('user' in data) {
/**
* The ID of the server owner. Use {@link ApplicationServer.fetchOwner} to return the
* full PteroUser object via {@link ApplicationServer.owner}.
* @type {number}
*/
this.ownerId = data.user;
}
if (!this.owner) {
/**
* The server owner PteroUser object. This can be fetched by including 'user' in
* the ApplicationServerManager.fetch, or via {@link ApplicationServer.fetchOwner}.
* @type {?PteroUser}
*/
this.owner = this.client.users.resolve(data);
}
if ('node' in data) {
/**
* The ID of the node. This is not received by default and must be fetched
* via the client NodeManager.
* @type {number}
*/
this.nodeId = data.node;
}
if (!this.node) {
/**
* The node object that the server is part of. This can be fetched by including
* 'node' in the ApplicationServerManager.fetch.
* @type {?Node}
*/
this.node = this.client.nodes.resolve(data);
}
if ('allocation' in data) {
/**
* The ID of the allocation for this server.
* @type {number}
*/
this.allocation = data.allocation;
}
if ('nest' in data) {
/**
* The ID of the nest this server is part of.
* @type {number}
*/
this.nest = data.nest;
}
if ('egg' in data) {
/**
* The ID of the egg this server uses.
* @type {number}
*/
this.egg = data.egg;
}
if ('-' in data) {
/**
* @todo Implement container manager
*/
this.container = null;
}
}
/**
* Returns a formatted URL to the server.
* @returns {string} The formatted URL.
*/
get panelURL() {
return `${this.client.domain}/server/${this.identifier}`;
}
/**
* Returns a formatted URL to the server in the admin panel.
* @returns {string} The formatted URL.
*/
get adminURL() {
return `${this.client.domain}/admin/servers/view/${this.id}`;
}
/**
* Fetches the PteroUser object of the server owner.
* The user can be accessed via {@link ApplicationServer.owner}.
* @returns {Promise<PteroUser>} The fetched user.
*/
async fetchOwner() {
if (this.owner) return this.owner;
const user = await this.client.users.fetch(this.ownerId, { force: true });
this.owner = user;
return user;
}
/**
* Updates details of the server.
* @param {object} options Update details options.
* @param {string} [options.name] The new name of the server.
* @param {number|PteroUser} [options.owner] The new owner of the server.
* @param {string} [options.externalId] The new external ID of the server.
* @param {string} [options.description] The new description of the server.
* @returns {Promise<ApplicationServer>} The updated server instance.
*/
async updateDetails(options = {}) {
if (!Object.keys(options).length) throw new Error('Too few options to update.');
const owner = options.owner instanceof PteroUser ? options.owner.id : options.owner;
const payload = {};
payload.name = options.name ?? this.name;
payload.user = owner ?? this.ownerId;
payload.external_id = options.externalId ?? this.externalId;
payload.description = options.description ?? this.description;
await this.client.requests.patch(
endpoints.servers.details(this.id), payload
);
this._patch(payload);
return this;
}
/**
* Updates the server's build structure.
* @param {object} options Update build options.
* @param {number} [options.allocation] The ID of the allocation for the server.
* @param {number} [options.swap] Server space swap option.
* @param {number} [options.memory] The amount of memory allowed for the server.
* @param {number} [options.disk] The amount of disk allowed for the server.
* @param {number} [options.cpu] The amount of CPU to allow for the server.
* @param {?number} [options.threads] The number of threads for the server.
* @param {number} [options.io]
* @param {object} [options.featureLimits] Feature limits options.
* @param {number} [options.featureLimits.allocations] The server allocations limit.
* @param {number} [options.featureLimits.backups] The server backups limit.
* @param {number} [options.featureLimits.databases] The server databases limit.
* @returns {Promise<ApplicationServer>} The updated server instance.
*/
async updateBuild(options = {}) {
if (!Object.keys(options).length) throw new Error('Too few options to update.');
options.allocation ??= this.allocation;
options.swap ??= this.limits.swap ?? 0;
options.memory ??= this.limits.memory ?? 0;
options.disk ??= this.limits.disk ?? 0;
options.cpu ??= this.limits.cpu ?? 0;
options.threads ??= this.limits.threads;
options.io ??= this.limits.io;
options.featureLimits ??= {};
options.featureLimits.allocations ??= this.featureLimits.allocations ?? 0;
options.featureLimits.backups ??= this.featureLimits.backups ?? 0;
options.featureLimits.databases ??= this.featureLimits.databases ?? 0;
// TODO: caseConv update
options.feature_limits = caseConv.snakeCase(options.featureLimits);
await this.client.requests.patch(
endpoints.servers.build(this.id), options
);
this._patch(options);
return this;
}
/**
* Updates the server's startup configuration.
* @param {object} options Startup options.
* @todo
*/
async updateStartup(options = {}) {}
/**
* Suspends the server.
* @returns {Promise<void>}
*/
async suspend() {
await this.client.requests.post(
endpoints.servers.suspend(this.id), null
);
this.suspended = true;
}
/**
* Unsuspends the server.
* @returns {Promise<void>}
*/
async unsuspend() {
await this.client.requests.post(
endpoints.servers.unsuspend(this.id), null
);
this.suspended = false;
}
/**
* Reinstalls the server.
* @returns {Promise<void>}
*/
async reinstall() {
await this.client.requests.post(
endpoints.servers.reinstall(this.id), null
);
}
/**
* Deletes the server (with force option).
* @param {boolean} [force] Whether to force delete the server.
* @returns {Promise<boolean>}
*/
async delete(force = false) {
return this.client.servers.delete(this.id, force);
}
/**
* Returns the JSON value of the server.
* @returns {object} The JSON value.
*/
toJSON() {
return caseConv.snakeCase(this, ['client']);
}
}
module.exports = ApplicationServer;
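The `??=` chain in `updateBuild` merges caller options over the server's current limits before the PATCH request. The same defaulting pattern, isolated with plain objects (the server shape here is a simplified stand-in, not the real class):

```javascript
// Simplified stand-in for an ApplicationServer's current state.
const server = {
    allocation: 3,
    limits: { memory: 1024, swap: 0, disk: 4096, cpu: 100, threads: null, io: 500 },
    featureLimits: { allocations: 1, backups: 2, databases: 0 },
};

// Mirrors the defaulting performed by updateBuild() before sending the payload.
function mergeBuildOptions(server, options = {}) {
    options.allocation ??= server.allocation;
    options.swap ??= server.limits.swap ?? 0;
    options.memory ??= server.limits.memory ?? 0;
    options.disk ??= server.limits.disk ?? 0;
    options.cpu ??= server.limits.cpu ?? 0;
    options.threads ??= server.limits.threads;
    options.io ??= server.limits.io;
    options.featureLimits ??= {};
    options.featureLimits.allocations ??= server.featureLimits.allocations ?? 0;
    options.featureLimits.backups ??= server.featureLimits.backups ?? 0;
    options.featureLimits.databases ??= server.featureLimits.databases ?? 0;
    return options;
}
```

Only the fields the caller supplies change; everything else is carried over from the current build.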


@@ -1,184 +0,0 @@
const ClientDatabaseManager = require('../client/ClientDatabaseManager');
const FileManager = require('../client/FileManager');
const NetworkAllocationManager = require('../client/NetworkAllocationManager');
const Permissions = require('./Permissions');
const SubUserManager = require('../client/SubUserManager');
const endpoints = require('../client/endpoints');
class ClientServer {
constructor(client, data) {
this.client = client;
const attr = data.attributes;
/**
* The internal UUID of the server.
* @type {string}
*/
this.uuid = attr.uuid;
/**
* A substring of the server's UUID to easily identify it.
* @type {string}
*/
this.identifier = attr.identifier;
/** @type {SubUserManager} */
this.users = new SubUserManager(client, this);
/** @type {NetworkAllocationManager} */
this.allocations = new NetworkAllocationManager(client, this);
/** @type {Permissions} */
this.permissions = new Permissions(data.meta?.user_permissions ?? {});
/** @type {ClientDatabaseManager} */
this.databases = new ClientDatabaseManager(client, this, attr.relationships);
/** @type {FileManager} */
this.files = new FileManager(client, this, attr.relationships);
this._patch(attr);
}
_patch(data) {
if ('server_owner' in data) {
/**
* Whether the client user is the owner of the server.
* @type {boolean}
*/
this.isOwner = data.server_owner;
}
if ('name' in data) {
/**
* The name of the server.
* @type {string}
*/
this.name = data.name;
}
if ('node' in data) {
/**
* The name of the node the server is on.
* @type {string}
*/
this.node = data.node;
}
if ('sftp_details' in data) {
/**
* An object containing SFTP details.
* @type {object}
*/
this.sftp = {
/** @type {string} */
ip: data.sftp_details.ip,
/** @type {number} */
port: data.sftp_details.port
}
}
if ('description' in data) {
/**
* A brief description of the server (if set).
* @type {?string}
*/
this.description = data.description || null;
}
if ('limits' in data) {
/**
* An object containing the server's limits.
* @type {object}
*/
this.limits = data.limits;
}
if ('feature_limits' in data) {
/**
* An object containing the server's feature limits.
* @type {object}
*/
this.featureLimits = data.feature_limits;
}
if ('is_suspended' in data) {
/**
* Whether the server is suspended from action.
* @type {boolean}
*/
this.suspended = data.is_suspended;
}
if ('state' in data) {
/**
* The current power state of the server.
* @type {string}
*/
this.state = data.state || 'unknown';
}
if ('is_installing' in data) {
/**
* Whether the server is currently being installed.
* @type {boolean}
*/
this.installing = data.is_installing;
}
}
/**
* Returns a formatted URL to the server.
* @returns {string} The formatted URL.
*/
get panelURL() {
return `${this.client.domain}/server/${this.identifier}`;
}
/**
* Adds the server to the WebSocket connection list to be established.
* @returns {void}
*/
addWebSocket() {
this.client.addSocketServer(this.identifier);
}
get schedules() {
return this.client.schedules.cache.get(this.identifier);
}
/** @todo */
async fetchResources() {}
/**
* Sends a command to the server terminal.
* @param {string} command The command to send.
* @returns {Promise<void>}
*/
async sendCommand(command) {
await this.client.requests.post(
endpoints.servers.command(this.identifier), { command }
);
}
/**
* Changes the server's power state. This can be one of the following:
* * start
* * stop
* * restart
* * kill
* @param {string} state The power state to set the server to.
* @returns {Promise<void>}
*/
async setPowerState(state) {
if (!['start', 'stop', 'restart', 'kill'].includes(state))
throw new Error('Invalid power state.');
await this.client.requests.post(
endpoints.servers.power(this.identifier), { signal: state }
);
this.state = state;
}
}
module.exports = ClientServer;
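`setPowerState` validates the signal before POSTing it to the panel's power endpoint. The guard can be exercised standalone with the HTTP call stubbed out; `makeServer` and the `/power` path string are illustrative stand-ins (the real endpoint is `/api/client/servers/:id/power`):

```javascript
const POWER_STATES = ['start', 'stop', 'restart', 'kill'];

// Hypothetical factory: a minimal object with setPowerState and a stubbed
// requests.post, mirroring the validation in ClientServer.
function makeServer(requests) {
    return {
        state: 'offline',
        async setPowerState(state) {
            if (!POWER_STATES.includes(state))
                throw new Error('Invalid power state.');
            await requests.post('/power', { signal: state });
            this.state = state;
        },
    };
}
```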


@@ -1,189 +0,0 @@
/**
* Dict (or Dictionary) is an extended Map with additional helper methods
* used for manager caches in the PteroJS library.
* @extends {Map}
*/
class Dict extends Map {
has(key) {
return super.has(key);
}
get(key) {
return super.get(key);
}
set(key, value) {
return super.set(key, value);
}
delete(key) {
return super.delete(key);
}
/**
* Checks if at least one of the items in the dict pass the function.
* @param {Function} fn The function to apply to the dict.
* @returns {boolean}
*/
some(fn) {
for (const [k, v] of this) if (fn(v, k, this)) return true;
return false;
}
/**
* Checks if all the items in the dict pass the function.
* @param {Function} fn The function to apply to the dict.
* @returns {boolean}
*/
every(fn) {
for (const [k, v] of this) if (!fn(v, k, this)) return false;
return true;
}
/**
* Checks that any of the specified keys exist in the dict.
* @param {...any} keys The keys to check for.
* @returns {boolean}
*/
hasAny(...keys) {
return keys.some(k => super.has(k));
}
/**
* Checks that all of the specified keys exist in the dict.
* @param {...any} keys The keys to check for.
* @returns {boolean}
*/
hasAll(...keys) {
return keys.every(k => super.has(k));
}
/**
* Returns the first item (or items if otherwise specified) in the dict.
* @param {number} [amount] The number of items to return from the start of the dict.
* @returns {any|any[]}
*/
first(amount) {
const v = [...super.values()];
if (amount === undefined) return v[0];
const s = v.splice(0, amount);
return s.length === 1 ? s[0] : s;
}
/**
* Returns the last item (or items if otherwise specified) in the dict.
* @param {number} [amount] The number of items to return from the end of the dict.
* @returns {any|any[]}
*/
last(amount) {
const v = [...super.values()];
if (amount === undefined) return v[v.length-1];
const s = v.slice(-amount);
return s.length === 1 ? s[0] : s;
}
/**
* Returns a random item (or items if otherwise specified) in the dict.
* @param {number} [amount] The number of random items to return.
* @returns {any|any[]}
*/
random(amount) {
const v = [...super.values()];
if (amount === undefined) return v[Math.floor(Math.random() * v.length)];
const s = [];
for (let i=0; i<amount; i++) s.push(v[Math.floor(Math.random() * v.length)]);
return s.length === 1 ? s[0] : s;
}
/**
* Applies the function to each item in the dict and returns an array of the results.
* @param {Function} fn The function to apply to the dict.
* @returns {any[]}
*/
map(fn) {
const res = [];
for (const [k, v] of this) res.push(fn(v, k, this));
return res;
}
/**
* Applies the function to each item in the dict and returns a dict of the results that passed.
* @param {Function} fn The function to apply to the dict.
* @returns {Dict<any, any>}
*/
filter(fn) {
const res = new Dict();
for (const [k, v] of this) if (fn(v, k, this)) res.set(k, v);
return res;
}
/**
* Applies a function to each item in the dict and returns the first one that passes.
* @param {Function} fn The function to apply to the dict.
* @returns {?any}
*/
find(fn) {
for (const [k, v] of this) if (fn(v, k, this)) return v;
return undefined;
}
/**
* Applies a function to each item in the dict and returns the number of items removed.
* @param {Function} fn The function to apply to the dict.
* @returns {number}
*/
sweep(fn) {
let res = 0;
for (const [k, v] of this) if (fn(v, k, this)) super.delete(k) && res++;
return res;
}
/**
* Applies a function to each item in the dict and returns 2 dicts, the first containing
* items that passed the function and the second containing the failed items.
* @param {Function} fn The function to apply to the dict.
* @returns {Dict<any, any>[]}
*/
part(fn) {
const pass = new Dict();
const fail = new Dict();
for (const [k, v] of this) if (fn(v, k, this)) pass.set(k, v); else fail.set(k, v);
return [pass, fail];
}
/**
* Reduces each item in the dict to a single value.
* @param {Function} fn The function to apply to the dict.
* @param {any} acc The object to accumulate.
* @returns {any}
*/
reduce(fn, acc) {
for (const [k, v] of this) acc = fn(acc, v, k, this);
return acc;
}
/**
* Joins one or more dicts with the current one and returns the value.
* @param {...Dict<any, any>} dict The dicts to join.
* @returns {Dict<any, any>}
*/
join(...dict) {
const res = new Dict(this);
for (const d of dict) for (const [k, v] of d) res.set(k, v);
return res;
}
/**
* Returns a dict containing the different items between both dicts.
* @param {Dict<any, any>} dict The dict to compare differences to.
* @returns {Dict<any, any>}
*/
difference(dict) {
const res = new Dict();
for (const [k, v] of this) if (!dict.has(k)) res.set(k, v);
for (const [k, v] of dict) if (!super.has(k)) res.set(k, v);
return res;
}
}
module.exports = Dict;
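A standalone demonstration of a few of the helpers. The class below is a trimmed reimplementation of the Dict above, kept only to make the example self-contained:

```javascript
// Trimmed Dict: just the helpers exercised below.
class Dict extends Map {
    map(fn) {
        const res = [];
        for (const [k, v] of this) res.push(fn(v, k, this));
        return res;
    }
    filter(fn) {
        const res = new Dict();
        for (const [k, v] of this) if (fn(v, k, this)) res.set(k, v);
        return res;
    }
    part(fn) {
        const pass = new Dict(), fail = new Dict();
        for (const [k, v] of this) (fn(v, k, this) ? pass : fail).set(k, v);
        return [pass, fail];
    }
    difference(dict) {
        const res = new Dict();
        for (const [k, v] of this) if (!dict.has(k)) res.set(k, v);
        for (const [k, v] of dict) if (!this.has(k)) res.set(k, v);
        return res;
    }
}

// Sample cache shaped like a server manager's: id -> server-like object.
const servers = new Dict([
    [1, { name: 'hub', suspended: false }],
    [2, { name: 'dev', suspended: true }],
]);
const [active, suspended] = servers.part(s => !s.suspended);
```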


@@ -1,18 +0,0 @@
exports.RequestError = class RequestError extends Error {
constructor(message) { super(message) };
}
exports.PteroAPIError = class PteroAPIError extends Error {
constructor(data) {
const fmt = data.errors.map(
e => `- ${e.status}: ${e.detail || 'No details provided'}`
).join('\n');
super('\n'+ fmt);
this.code = data.errors[0].code;
}
}
exports.WebSocketError = class WebSocketError extends Error {
constructor(message) { super(message) };
}
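The formatted `PteroAPIError` message can be verified directly. The class below is copied from the export above; the sample payload mirrors Pterodactyl's error envelope shape (`{ errors: [{ code, status, detail }] }`):

```javascript
class PteroAPIError extends Error {
    constructor(data) {
        const fmt = data.errors.map(
            e => `- ${e.status}: ${e.detail || 'No details provided'}`
        ).join('\n');
        super('\n'+ fmt);
        this.code = data.errors[0].code;
    }
}

// Sample error envelope; an empty detail falls back to the placeholder text.
const err = new PteroAPIError({
    errors: [
        { code: 'NotFoundHttpException', status: '404', detail: 'The requested resource was not found.' },
        { code: 'ValidationException', status: '422', detail: '' },
    ],
});
```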


@@ -1,231 +0,0 @@
const { NodeLocation } = require('../application/NodeLocationManager');
const caseConv = require('../util/caseConv');
const endpoints = require('../application/endpoints');
class Node {
constructor(client, data) {
this.client = client;
data = data.attributes;
/**
* The ID of the node.
* @type {number}
*/
this.id = data.id;
/**
* The internal UUID of the node.
* @type {string}
*/
this.uuid = data.uuid;
/**
* The date the node was created.
* @type {Date}
*/
this.createdAt = new Date(data.created_at);
/**
* The date the node was last updated.
* @type {?Date}
*/
this.updatedAt = data.updated_at ? new Date(data.updated_at) : null;
this._patch(data);
}
_patch(data) {
if ('public' in data) {
/**
* Whether the node is public to other users.
* @type {boolean}
*/
this.public = data.public;
}
if ('name' in data) {
/**
* The name of the node.
* @type {string}
*/
this.name = data.name;
}
if ('description' in data) {
/**
* A brief description of the node (if set).
* @type {?string}
*/
this.description = data.description || null;
}
if ('location_id' in data) {
/**
* The ID of the node location.
* @type {number}
*/
this.locationId = data.location_id;
}
if (!this.location) {
/**
* The location of the node.
* @type {?NodeLocation}
*/
this.location = this.client.locations.resolve(data);
}
if (!this.servers) {
/**
* A map of servers the node currently contains.
*/
this.servers = this.client.servers.resolve(data);
}
if ('fqdn' in data) {
/**
* The FQDN for the node.
* @type {string}
*/
this.fqdn = data.fqdn;
}
if ('scheme' in data) {
/**
* The HTTP scheme for the node.
* @type {string}
*/
this.scheme = data.scheme;
}
if ('behind_proxy' in data) {
/**
* Whether the node is behind a proxy.
* @type {boolean}
*/
this.behindProxy = data.behind_proxy;
}
if ('maintenance_mode' in data) {
/**
* Whether the node is in maintenance mode.
* @type {boolean}
*/
this.maintenance = data.maintenance_mode;
}
if ('memory' in data) {
/**
* The amount of memory the node has.
* @type {number}
*/
this.memory = data.memory;
}
if ('memory_overallocate' in data) {
/**
* The amount of memory the node has overallocated.
* @type {number}
*/
this.overallocatedMemory = data.memory_overallocate;
}
if ('disk_overallocate' in data) {
/**
* The amount of disk the node has overallocated.
* @type {number}
*/
this.overallocatedDisk = data.disk_overallocate;
}
if ('upload_size' in data) {
/**
* The maximum upload size for the node.
* @type {number}
*/
this.uploadSize = data.upload_size;
}
if ('daemon_listen' in data) {
/**
* An object containing Pterodactyl Daemon details.
* @type {object}
*/
this.daemon = {
/**
* @type {number}
*/
listening: data.daemon_listen,
/**
* @type {number}
*/
sftp: data.daemon_sftp,
/**
* @type {string}
*/
base: data.daemon_base
}
}
}
/**
* Returns a formatted URL to the node in the admin panel.
* @returns {string} The formatted URL.
*/
get adminURL() {
return `${this.client.domain}/admin/nodes/view/${this.id}`;
}
/**
* Returns the node's config (untyped).
* @returns {Promise<object>} The node config.
*/
async getConfig() {
return await this.client.requests.get(
endpoints.nodes.config(this.id)
);
}
/**
* Updates the node with the specified options.
* @param {object} options Node update options.
* @param {string} [options.name] The name of the node.
* @param {number} [options.location] The ID of the location for the node.
* @param {string} [options.fqdn] The FQDN for the node.
* @param {string} [options.scheme] The HTTP/HTTPS scheme for the node.
* @param {number} [options.memory] The amount of memory for the node.
* @param {number} [options.disk] The amount of disk for the node.
* @param {object} [options.sftp] SFTP options.
* @param {number} [options.sftp.port] The SFTP port.
* @param {number} [options.sftp.listener] The SFTP listener port.
* @param {number} [options.upload_size] The maximum upload size for the node.
* @param {number} [options.memory_overallocate] The amount of memory over allocation.
* @param {number} [options.disk_overallocate] The amount of disk over allocation.
* @returns {Promise<Node>} The updated node instance.
*/
async update(options = {}) {
return this.client.nodes.update(this, options);
}
/**
* Deletes the node from Pterodactyl.
* **WARNING:** This is an irreversible action and requires all servers to be removed
* from the node before deleting.
* @returns {Promise<boolean>}
*/
async delete() {
return this.client.nodes.delete(this.id);
}
/**
* Returns the JSON value of the Node.
* @returns {object} The JSON value.
*/
toJSON() {
return caseConv.snakeCase(this, ['client']);
}
}
module.exports = Node;

View File

@@ -1,188 +0,0 @@
const FLAGS = {
WEBSOCKET_CONNECT: 0,
CONTROL_CONSOLE: 1,
CONTROL_START: 2,
CONTROL_STOP: 3,
CONTROL_RESTART: 4,
USER_CREATE: 5,
USER_READ: 6,
USER_UPDATE: 7,
USER_DELETE: 8,
FILE_CREATE: 9,
FILE_READ: 10,
'FILE_READ-CONTENT': 11,
FILE_UPDATE: 12,
FILE_DELETE: 13,
FILE_ARCHIVE: 14,
FILE_SFTP: 15,
BACKUP_CREATE: 16,
BACKUP_READ: 17,
BACKUP_UPDATE: 18,
BACKUP_DELETE: 19,
ALLOCATION_READ: 20,
ALLOCATION_CREATE: 21,
ALLOCATION_UPDATE: 22,
ALLOCATION_DELETE: 23,
STARTUP_READ: 24,
STARTUP_UPDATE: 25,
DATABASE_CREATE: 26,
DATABASE_READ: 27,
DATABASE_UPDATE: 28,
DATABASE_DELETE: 29,
DATABASE_VIEW_PASSWORD: 30,
SCHEDULE_CREATE: 31,
SCHEDULE_READ: 32,
SCHEDULE_UPDATE: 33,
SCHEDULE_DELETE: 34,
SETTINGS_RENAME: 35,
SETTINGS_REINSTALL: 36,
ADMIN_WEBSOCKET_ERRORS: 40,
ADMIN_WEBSOCKET_INSTALL: 41,
ADMIN_WEBSOCKET_TRANSFER: 42
}
class Permissions {
/**
* An object containing all Pterodactyl permissions.
*/
static get FLAGS() {
return Object.freeze(FLAGS);
}
/**
* Default Pterodactyl user permissions.
*/
static get DEFAULT() {
return Object.freeze({
CONTROL_CONSOLE: 1,
CONTROL_START: 2,
CONTROL_STOP: 3,
CONTROL_RESTART: 4
});
}
/**
* @param {PermissionResolvable} data The data to resolve permissions from.
*/
constructor(data) {
/**
* The raw permissions object.
* @type {object}
*/
this.raw = Permissions.resolve(data);
}
/**
* Checks whether the specified permissions are currently present.
* When more than one permission is given, all of them must be present (AND logic).
* @param {string|number|PermissionResolvable} perms The permissions to check for.
* @returns {boolean}
*/
has(perms) {
if (typeof perms === 'string' || typeof perms === 'number') perms = [perms];
perms = Object.keys(Permissions.resolve(perms));
for (const p of perms) if (!this.raw[p]) return false;
return true;
}
/**
* Checks whether the current permissions include any administrative permissions.
* @returns {boolean}
*/
isAdmin() {
return this.toArray().some(p => p.includes('ADMIN'));
}
/**
* Resolves a permissions object from a specified source.
* @see {@link PermissionResolvable}
* @param {PermissionResolvable} perms The data to resolve the permissions from.
* @returns {object} The resolved permissions.
*/
static resolve(perms) {
const res = {};
if (typeof perms === 'object' && !Array.isArray(perms)) perms = Object.keys(perms);
if (!perms || !perms?.length) return {};
if (diff(perms)) throw new TypeError('Permissions must be all strings or all numbers.');
if (typeof perms[0] === 'string') perms = Object.keys(this.fromStrings(perms));
const entries = Object.entries(this.FLAGS);
for (const p of perms) {
if (
this.FLAGS[p] === undefined &&
!entries.find(e => e[1] === p)
) throw new Error(`Unknown permission '${p}'.`);
const e = entries.find(e => e.includes(p));
res[e[0]] = e[1];
}
return res;
}
/**
* Returns an object with all the permissions having `true` or `false` values
* if they are currently present.
* @returns {object} The serialized permissions.
*/
serialize() {
const res = {};
Object.keys(Permissions.FLAGS).forEach(f => res[f] = this.has(f));
return res;
}
/**
* Returns an array of the current permissions.
* @returns {string[]} The permissions array.
*/
toArray() {
return Object.keys(this.raw);
}
/**
* Returns an array of the current permissions in string form.
* @returns {string[]} The permission strings array.
*/
toStrings() {
return this.toArray().map(p => p.toLowerCase().replace(/_/g, '.'));
}
/**
* Returns a permission object from the default string permissions.
* @param {string[]} perms The array of default permissions.
* @returns {object} The resolved permissions.
*/
static fromStrings(perms) {
const res = {};
if (perms.includes('*')) return Object.assign({}, Permissions.FLAGS);
for (let p of perms) {
p = p.toUpperCase().replace(/\./g, '_');
if (Permissions.FLAGS[p] === undefined) throw new Error(`Unknown permission '${p}'.`);
res[p] = Permissions.FLAGS[p];
}
return res;
}
}
module.exports = Permissions;
function diff(perms) {
return perms.some(p => typeof p === 'string') && perms.some(p => typeof p === 'number');
}
/**
* Data that can be resolved into a Permissions object. Valid types are:
* * An array of strings
* * An array of numbers
* * An object
* @typedef {string[]|number[]|object} PermissionResolvable
*/
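The `fromStrings`/`toStrings` pair above relies on a purely lexical mapping between the panel's dotted permission strings and the uppercase `FLAGS` keys. A minimal sketch of that round trip (these two helpers are illustrative, not part of the library):

```javascript
// Dotted permission strings <-> uppercase flag names (mirrors the
// toUpperCase/replace logic in fromStrings and toStrings above).
const toFlagName = s => s.toUpperCase().replace(/\./g, '_');
const toPermString = f => f.toLowerCase().replace(/_/g, '.');

console.log(toFlagName('control.start'));     // 'CONTROL_START'
console.log(toPermString('FILE_READ'));       // 'file.read'
console.log(toFlagName('file.read-content')); // 'FILE_READ-CONTENT'
```

The last case shows why `'FILE_READ-CONTENT'` is a quoted key in `FLAGS`: only dots are rewritten, so the hyphen survives the conversion.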

View File

@@ -1,220 +0,0 @@
const Dict = require('./Dict');
const endpoints = require('../client/endpoints');
class Schedule {
constructor(client, serverId, data) {
this.client = client;
this.serverId = serverId;
/** @type {Dict<number, ScheduleTask>} */
this.tasks = new Dict();
data = data.attributes;
/**
* The ID of the schedule.
* @type {number}
*/
this.id = data.id;
/**
* The date the schedule was created.
* @type {Date}
*/
this.createdAt = new Date(data.created_at);
this._patch(data);
}
_patch(data) {
if ('name' in data) {
/**
* The name of the schedule.
* @type {string}
*/
this.name = data.name;
}
if ('cron' in data) {
/**
* An object containing cronjob details.
* @type {object}
*/
this.cron = {
/** @type {string} */
week: data.cron.day_of_week,
/** @type {string} */
month: data.cron.day_of_month,
/** @type {string} */
hour: data.cron.hour,
/** @type {string} */
minute: data.cron.minute
}
}
if ('is_active' in data) {
/**
* Whether the schedule is active.
* @type {boolean}
*/
this.active = data.is_active;
}
if ('is_processing' in data) {
/**
* Whether the schedule is currently processing tasks.
* @type {boolean}
*/
this.processing = data.is_processing;
}
if ('last_run_at' in data) {
/**
* The last recorded date the schedule ran.
* @type {?Date}
*/
this.lastRunAt = data.last_run_at ? new Date(data.last_run_at) : null;
}
if ('next_run_at' in data) {
/**
* The date of the next scheduled run.
* @type {Date}
*/
this.nextRunAt = new Date(data.next_run_at);
}
if ('updated_at' in data) {
/**
* The date the schedule was last updated.
* @type {?Date}
*/
this.updatedAt = data.updated_at ? new Date(data.updated_at) : null;
}
if ('relationships' in data) {
for (const obj of data.relationships.tasks.data) {
this._resolveTask(obj);
}
}
}
_resolveTask(data) {
if (data.attributes) data = data.attributes;
const obj = {
id: data.id,
sequenceId: data.sequence_id,
action: data.action,
payload: data.payload,
offset: data.time_offset,
queued: data.is_queued,
createdAt: new Date(data.created_at),
updatedAt: data.updated_at ? new Date(data.updated_at) : null
}
this.tasks.set(obj.id, obj);
return obj;
}
/**
* Returns a formatted URL to the schedule.
* @returns {string} The formatted URL.
*/
get panelURL() {
return `${this.client.domain}/server/${this.serverId}/schedules/${this.id}`;
}
/**
* Updates the schedule.
* @param {object} options Schedule update options.
* @param {string} [options.name] The name of the schedule.
* @param {boolean} [options.active] Whether the schedule is active.
* @param {string} [options.minute] The minute interval (in cron syntax).
* @param {string} [options.hour] The hour interval (in cron syntax).
* @param {string} [options.dayOfWeek] The day of the week interval (in cron syntax).
* @param {string} [options.dayOfMonth] The day of the month interval (in cron syntax).
* @returns {Promise<Schedule>} The updated Schedule instance.
*/
async update(options = {}) {
return this.client.schedules.update(this.serverId, this.id, options);
}
/**
* Creates a new task for the schedule.
* @param {string} action The type of action that will be executed.
* @param {string} payload The payload to invoke the task with.
* @param {string} offset The task offset (in seconds).
* @returns {Promise<ScheduleTask>} The new schedule task.
*/
async createTask(action, payload, offset) {
if (!['command', 'power', 'backup'].includes(action))
throw new TypeError('Invalid task action type.');
const data = await this.client.requests.post(
endpoints.servers.schedules.tasks.main(this.serverId, this.id),
{ action, payload, time_offset: offset }
);
return this._resolveTask(data);
}
/**
* Updates an existing task for the schedule.
* @param {number} id The ID of the schedule task.
* @param {object} options Schedule task edit options.
* @param {string} options.action The type of action that will be executed.
* @param {string} options.payload The payload to invoke the task with.
* @param {string} options.offset The task offset (in seconds).
* @returns {Promise<ScheduleTask>} The updated schedule task.
*/
async updateTask(id, options = {}) {
if (Object.keys(options).length < 3)
throw new Error('Missing required ScheduleTask update options.');
if (!['command', 'power', 'backup'].includes(options.action))
throw new TypeError('Invalid task action type.');
options.time_offset = options.offset;
const data = await this.client.requests.post(
endpoints.servers.schedules.tasks.get(this.serverId, this.id, id),
options
);
return this._resolveTask(data);
}
/**
* Deletes a specified task from the schedule.
* @param {number} id The ID of the schedule task.
* @returns {Promise<boolean>}
*/
async deleteTask(id) {
await this.client.requests.delete(
endpoints.servers.schedules.tasks.get(this.serverId, this.id, id)
);
this.tasks.delete(id);
return true;
}
/**
* Deletes the schedule from the server.
* @returns {Promise<boolean>}
*/
async delete() {
return this.client.schedules.delete(this.serverId, this.id);
}
}
module.exports = Schedule;
/**
* Represents a schedule task.
* @typedef {object} ScheduleTask
* @property {number} id The ID of the task.
* @property {number} sequenceId The ID of the current sequence.
* @property {string} action The action for this task.
* @property {string} payload
* @property {number} offset
* @property {boolean} queued Whether the task is queued in the schedule.
* @property {Date} createdAt The date the task was created.
* @property {?Date} updatedAt The date the task was last updated.
*/
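The `cron` object in the Schedule class above stores four of the five crontab fields, under slightly confusing names (`week` is day-of-week, `month` is day-of-month). A hypothetical helper to rebuild the full expression, assuming the month field is always `*` since the wrapper does not expose it:

```javascript
// Rebuild a five-field crontab expression from Schedule#cron.
// Field order: minute, hour, day-of-month, month, day-of-week.
// Assumption: the month field is always '*' (not exposed by the wrapper).
function toCrontab(cron) {
    return [cron.minute, cron.hour, cron.month, '*', cron.week].join(' ');
}

console.log(toCrontab({ minute: '30', hour: '0', month: '1', week: '*' }));
// '30 0 1 * *'  -> 00:30 on the first day of every month
```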

View File

@@ -1,395 +0,0 @@
const ApplicationServer = require('./ApplicationServer');
const Permissions = require('./Permissions');
const { PermissionResolvable } = require('./Permissions');
const Dict = require('./Dict');
const caseConv = require('../util/caseConv');
const c_path = require('../client/endpoints');
let loggedDeprecated = false;
class BaseUser {
constructor(client, data) {
this.client = client;
this._patch(data);
}
_patch(data) {
if ('id' in data) {
/** @type {number} */
this.id = data.id;
}
if ('username' in data) {
/** @type {string} */
this.username = data.username;
}
if ('email' in data) {
/** @type {string} */
this.email = data.email;
}
if ('first_name' in data) {
/** @type {string} */
this.firstname = data.first_name;
}
if ('last_name' in data) {
/** @type {string} */
this.lastname = data.last_name;
}
if ('language' in data) {
/** @type {string} */
this.language = data.language;
}
}
/**
* Returns the string value of the user.
* @returns {string} The fullname.
*/
toString() {
return this.firstname +' '+ this.lastname;
}
/**
* Returns the JSON value of the User.
* @returns {object} The JSON value.
*/
toJSON() {
return caseConv.snakeCase(this, ['client']);
}
}
class PteroUser extends BaseUser {
constructor(client, data) {
super(client, data);
/** @type {string} */
this.uuid = data.uuid;
this._patch(data);
}
_patch(data) {
super._patch(data);
if ('external_id' in data) {
/** @type {string} */
this.externalId = data.external_id;
}
if ('root_admin' in data) {
/** @type {boolean} */
this.isAdmin = data.root_admin ?? false;
}
if ('2fa' in data) {
/**
* @type {boolean}
* @deprecated Use {@link PteroUser.twoFactor} instead.
*/
this.tfa = data['2fa'];
/** @type {boolean} */
this.twoFactor = data['2fa'];
if (!loggedDeprecated) {
process.emitWarning(
"'PteroUser#tfa' is deprecated, use 'PteroUser#twoFactor' instead",
'Deprecated'
);
loggedDeprecated = true;
}
}
if ('created_at' in data) {
/** @type {Date} */
this.createdAt = new Date(data.created_at);
/** @type {number} */
this.createdTimestamp = this.createdAt.getTime();
}
if ('updated_at' in data) {
/** @type {?Date} */
this.updatedAt = data['updated_at'] ? new Date(data['updated_at']) : null;
/** @type {?number} */
this.updatedTimestamp = this.updatedAt?.getTime() || null;
}
if (!this.relationships) {
/**
* A map of servers the user is connected to.
* @type {?Dict<number, ApplicationServer>}
*/
this.relationships = this.client.servers.resolve(data);
}
}
/**
* Returns a formatted URL to the user in the admin panel.
* @returns {string} The formatted URL.
*/
get adminURL() {
return `${this.client.domain}/admin/users/view/${this.id}`;
}
/**
* Updates this user's account.
* @param {object} options Changes to update the user with.
* @param {string} [options.email] The new email for the account.
* @param {string} [options.username] The new username for the account.
* @param {string} [options.firstname] The new firstname for the account.
* @param {string} [options.lastname] The new lastname for the account.
* @param {string} [options.language] The new language for the account.
* @param {string} options.password The password for the user account.
* @returns {Promise<PteroUser>} The updated user instance.
*/
async update(options = {}) {
return this.client.users.update(this, options);
}
/**
* Deletes the user account from Pterodactyl.
* @returns {Promise<boolean>}
*/
async delete() {
return this.client.users.delete(this);
}
}
class PteroSubUser extends BaseUser {
constructor(client, server, data) {
super(client, data);
/** @type {string} */
this.uuid = data.uuid;
/** @type {string} */
this._server = server;
/** @type {Date} */
this.createdAt = new Date(data.created_at);
/** @type {number} */
this.createdTimestamp = this.createdAt.getTime();
/** @type {Permissions} */
this.permissions = new Permissions(data.permissions ?? {});
this._patch(data);
}
_patch(data) {
super._patch(data);
if ('image' in data) {
/** @type {string} */
this.image = data.image;
}
if ('2fa_enabled' in data) {
/** @type {boolean} */
this.enabled = data['2fa_enabled'];
}
}
/**
* Returns a formatted URL to the subuser.
* @returns {string} The formatted URL.
*/
get panelURL() {
return `${this.client.domain}/server/${this._server}/users`;
}
/**
* Updates the subuser's server permissions.
* @param {PermissionResolvable} perms The permissions to set.
* @returns {Promise<PteroSubUser>} The updated user instance.
*/
async setPermissions(perms) {
perms = new Permissions(perms);
await this.client.requests.post(
c_path.servers.users.get(this._server, this.uuid),
{ permissions: perms.toStrings() }
);
this.permissions = perms;
return this;
}
}
class ClientUser extends BaseUser {
constructor(client, data) {
super(client, data);
super._patch(data);
/** @type {boolean} */
this.isAdmin = data.admin;
/**
* An array of 2FA authentication tokens.
* @type {string[]}
*/
this.tokens = [];
/**
* An array of API keys for Pterodactyl.
* @type {APIKey[]}
*/
this.apikeys = [];
}
/**
* Returns a formatted URL to the client account.
* @returns {string} The formatted URL.
*/
get panelURL() {
return `${this.client.domain}/account`;
}
/**
* Fetches a 2FA code linked to the client's account.
* @returns {Promise<string>} The 2FA code.
*/
async get2faCode() {
const data = await this.client.requests.make(c_path.account.tfa);
return data.data.image_url_data;
}
/**
* Enables 2FA for the client user and returns an array of authentication tokens.
* @param {string} code The 2FA code to authenticate with.
* @returns {Promise<string[]>} The auth tokens.
*/
async enable2fa(code) {
const data = await this.client.requests.post(
c_path.account.tfa, { code }
);
this.tokens.push(...data.attributes.tokens);
return this.tokens;
}
/**
* Disables 2FA for the client user.
* @param {string} password The client user's account password.
* @returns {Promise<void>}
*/
async disable2fa(password) {
await this.client.requests.delete(
c_path.account.tfa, { password }
);
this.tokens = [];
}
/**
* Updates the client user's email.
* @param {string} email The new email.
* @param {string} password The client user's password.
* @returns {Promise<ClientUser>} The updated client user instance.
*/
async updateEmail(email, password) {
await this.client.requests.put(
c_path.account.email, { email, password }
);
this.email = email;
return this;
}
/**
* Updates the client user's password. **Note:** the PteroJS library does not store
* passwords on the client user object.
* @param {string} oldpass The current account password.
* @param {string} newpass The new account password.
* @returns {Promise<void>}
*/
async updatePassword(oldpass, newpass) {
if (oldpass === newpass) return Promise.resolve();
return await this.client.requests.put(
c_path.account.password,
{
current_password: oldpass,
password: newpass,
password_confirmation: newpass
}
);
}
/**
* Returns an array of API keys linked to the client user's account.
* @returns {Promise<APIKey[]>} An array of APIKey objects.
*/
async fetchKeys() {
const data = await this.client.requests.make(c_path.account.apikeys);
this.apikeys = [];
for (let o of data.data) {
o = o.attributes;
this.apikeys.push({
identifier: o.identifier,
description: o.description,
allowedIPs: o.allowed_ips,
lastUsedAt: o.last_used_at ? new Date(o.last_used_at) : null,
createdAt: new Date(o.created_at)
});
}
return this.apikeys;
}
/**
* Creates a new API key linked to the client user's account.
* @param {string} description A brief description of the use of the API key.
* @param {string[]} [allowed] An array of whitelisted IPs for the key.
* @returns {Promise<APIKey>} The new API key.
*/
async createKey(description, allowed = []) {
const data = await this.client.requests.post(
c_path.account.apikeys,
{ description, allowed_ips: allowed }
);
const att = data.attributes;
this.apikeys.push({
identifier: att.identifier,
description: att.description,
allowedIPs: att.allowed_ips,
lastUsedAt: att.last_used_at ? new Date(att.last_used_at) : null,
createdAt: new Date(att.created_at)
});
return this.apikeys.find(k => k.identifier === att.identifier);
}
/**
* Deletes the specified API key linked to the client user's account.
* @param {string} id The identifier of the API key to delete.
* @returns {Promise<void>}
*/
async deleteKey(id) {
await this.client.requests.delete(
c_path.account.apikeys +`/${id}`
);
this.apikeys = this.apikeys.filter(k => k.identifier !== id);
}
}
module.exports = {
BaseUser,
PteroUser,
PteroSubUser,
ClientUser
}
/**
* Represents a Pterodactyl API key.
* @typedef {object} APIKey
* @property {string} identifier The identifier of the API key.
* @property {string} description The description of the API key, usually for usage.
* @property {string[]} allowedIPs An array of IPs allowed to use this API key.
* @property {?Date} lastUsedAt The last recorded date of usage.
* @property {Date} createdAt The date the API key was created.
*/
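`fetchKeys` and `createKey` both perform the same attribute conversion inline; pulled out as a standalone sketch (`toAPIKey` is hypothetical, not part of the library):

```javascript
// Convert a raw API-key attributes object into the APIKey shape above:
// snake_case fields become camelCase and date strings are parsed.
function toAPIKey(att) {
    return {
        identifier: att.identifier,
        description: att.description,
        allowedIPs: att.allowed_ips,
        lastUsedAt: att.last_used_at ? new Date(att.last_used_at) : null,
        createdAt: new Date(att.created_at)
    };
}

const key = toAPIKey({
    identifier: 'abc123',
    description: 'CI deploys',
    allowed_ips: [],
    last_used_at: null,
    created_at: '2022-06-26T17:27:10Z'
});
console.log(key.identifier, key.lastUsedAt); // 'abc123' null
```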

View File

@@ -1,24 +0,0 @@
const assert = require('assert');
const { PteroClient } = require('../src');
const { api_url, api_key } = require('./auth.json');
const app = new PteroClient(api_url, api_key);
(async () => {
    // assert.doesNotThrow cannot observe rejections from an async function,
    // so await the promises with assert.doesNotReject instead.
    await assert.doesNotReject(app.connect(), 'could not connect to api');
    await assert.doesNotReject(app.fetchClient(), 'could not fetch users endpoint');
    assert.ok(app.user, 'user not fetched');
    await assert.doesNotReject(app.servers.fetch(), 'could not fetch servers endpoint');
    app.disconnect();
})();

View File

@@ -1,13 +0,0 @@
(async () => {
console.log('Running all tests...');
const { readdirSync } = require('fs');
let count = 0;
for (const mod of readdirSync(__dirname)) {
if (!mod.endsWith('.test.js')) continue;
console.log(`\nRunning Test #${count++}\n`+ '='.repeat(20));
require(`${__dirname}/${mod}`);
}
console.log('Completed all tests.');
})();

View File

@@ -1,31 +0,0 @@
const assert = require('assert');
const { NodeStatus } = require('../src');
const { api_url, app_key } = require('./auth.json');
const status = new NodeStatus({
domain: api_url,
auth: app_key,
nodes:[1],
callInterval: 30_000
});
status.on('connect', id => console.log(`connected to node ${id}`));
status.on('disconnect', id => console.log(`disconnected from node ${id}`));
status.on('interval', node => {
assert.ok(node, 'invalid node payload received');
assert.ok(Object.entries(node).length, 'empty node payload received');
console.log(`
Node Info
ID: ${node.id}
Name: ${node.name}
Memory: ${node.memory}
Disk: ${node.disk}
`);
status.close();
});
// assert.doesNotThrow cannot observe async rejections; use doesNotReject instead.
assert.doesNotReject(
    status.connect(),
    'could not connect node status to api'
);

View File

@@ -1,504 +0,0 @@
# node-pre-gyp changelog
## 1.0.9
- Upgraded node-fetch to 2.6.7 to address [CVE-2022-0235](https://www.cve.org/CVERecord?id=CVE-2022-0235)
- Upgraded detect-libc to 2.0.0 to use non-blocking NodeJS(>=12) Report API
## 1.0.8
- Downgraded npmlog to maintain node v10 and v8 support (https://github.com/mapbox/node-pre-gyp/pull/624)
## 1.0.7
- Upgraded nyc and npmlog to address https://github.com/advisories/GHSA-93q8-gq69-wqmw
## 1.0.6
- Added node v17 to the internal node releases listing
- Upgraded various dependencies declared in package.json to latest major versions (node-fetch from 2.6.1 to 2.6.5, npmlog from 4.1.2 to 5.01, semver from 7.3.4 to 7.3.5, and tar from 6.1.0 to 6.1.11)
- Fixed bug in `staging_host` parameter (https://github.com/mapbox/node-pre-gyp/pull/590)
## 1.0.5
- Fix circular reference warning with node >= v14
## 1.0.4
- Added node v16 to the internal node releases listing
## 1.0.3
- Improved support configuring s3 uploads (solves https://github.com/mapbox/node-pre-gyp/issues/571)
- New options added in https://github.com/mapbox/node-pre-gyp/pull/576: 'bucket', 'region', and `s3ForcePathStyle`
## 1.0.2
- Fixed regression in proxy support (https://github.com/mapbox/node-pre-gyp/issues/572)
## 1.0.1
- Switched from mkdirp@1.0.4 to make-dir@3.1.0 to avoid this bug: https://github.com/isaacs/node-mkdirp/issues/31
## 1.0.0
- Module is now name-spaced at `@mapbox/node-pre-gyp` and the original `node-pre-gyp` is deprecated.
- New: support for staging and production s3 targets (see README.md)
- BREAKING: no longer supporting `node_pre_gyp_accessKeyId` & `node_pre_gyp_secretAccessKey`, use `AWS_ACCESS_KEY_ID` & `AWS_SECRET_ACCESS_KEY` instead to authenticate against s3 for `info`, `publish`, and `unpublish` commands.
- Dropped node v6 support, added node v14 support
- Switched tests to use mapbox-owned bucket for testing
- Added coverage tracking and linting with eslint
- Added back support for symlinks inside the tarball
- Upgraded all test apps to N-API/node-addon-api
- New: support for staging and production s3 targets (see README.md)
- Added `node_pre_gyp_s3_host` env var which has priority over the `--s3_host` option or default.
- Replaced needle with node-fetch
- Added proxy support for node-fetch
- Upgraded to mkdirp@1.x
## 0.17.0
- Got travis + appveyor green again
- Added support for more node versions
## 0.16.0
- Added Node 15 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/520)
## 0.15.0
- Bump dependency on `mkdirp` from `^0.5.1` to `^0.5.3` (https://github.com/mapbox/node-pre-gyp/pull/492)
- Bump dependency on `needle` from `^2.2.1` to `^2.5.0` (https://github.com/mapbox/node-pre-gyp/pull/502)
- Added Node 14 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/501)
## 0.14.0
- Defer modules requires in napi.js (https://github.com/mapbox/node-pre-gyp/pull/434)
- Bump dependency on `tar` from `^4` to `^4.4.2` (https://github.com/mapbox/node-pre-gyp/pull/454)
- Support extracting compiled binary from local offline mirror (https://github.com/mapbox/node-pre-gyp/pull/459)
- Added Node 13 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/483)
## 0.13.0
- Added Node 12 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/449)
## 0.12.0
- Fixed double-build problem with node v10 (https://github.com/mapbox/node-pre-gyp/pull/428)
- Added node 11 support in the local database (https://github.com/mapbox/node-pre-gyp/pull/422)
## 0.11.0
- Fixed double-install problem with node v10
- Significant N-API improvements (https://github.com/mapbox/node-pre-gyp/pull/405)
## 0.10.3
- Now will use `request` over `needle` if request is installed. By default `needle` is used for `https`. This should unbreak proxy support that regressed in v0.9.0
## 0.10.2
- Fixed rc/deep-extent security vulnerability
- Fixed broken reinstall script due to incorrectly named get_best_napi_version
## 0.10.1
- Fix needle error event (@medns)
## 0.10.0
- Allow for a single-level module path when packing @allenluce (https://github.com/mapbox/node-pre-gyp/pull/371)
- Log warnings instead of errors when falling back @xzyfer (https://github.com/mapbox/node-pre-gyp/pull/366)
- Add Node.js v10 support to tests (https://github.com/mapbox/node-pre-gyp/pull/372)
- Remove retire.js from CI (https://github.com/mapbox/node-pre-gyp/pull/372)
- Remove support for Node.js v4 due to [EOL on April 30th, 2018](https://github.com/nodejs/Release/blob/7dd52354049cae99eed0e9fe01345b0722a86fde/schedule.json#L14)
- Update appveyor tests to install default NPM version instead of NPM v2.x for all Windows builds (https://github.com/mapbox/node-pre-gyp/pull/375)
## 0.9.1
- Fixed regression (in v0.9.0) with support for http redirects @allenluce (https://github.com/mapbox/node-pre-gyp/pull/361)
## 0.9.0
- Switched from using `request` to `needle` to reduce size of module deps (https://github.com/mapbox/node-pre-gyp/pull/350)
## 0.8.0
- N-API support (@inspiredware)
## 0.7.1
- Upgraded to tar v4.x
## 0.7.0
- Updated request and hawk (#347)
- Dropped node v0.10.x support
## 0.6.40
- Improved error reporting if an install fails
## 0.6.39
- Support for node v9
- Support for versioning on `{libc}` to allow binaries to work on non-glibc linux systems like alpine linux
## 0.6.38
- Maintaining compatibility (for v0.6.x series) with node v0.10.x
## 0.6.37
- Solved one part of #276: we now deduce the node ABI from the major version for node >= 2 even when not stored in the abi_crosswalk.json
- Fixed docs to avoid mentioning the deprecated and dangerous `prepublish` in package.json (#291)
- Add new node versions to crosswalk
- Ported tests to use tape instead of mocha
- Got appveyor tests passing by downgrading npm and node-gyp
## 0.6.36
- Removed the running of `testbinary` during install. Because this was regressed for so long, it is too dangerous to re-enable by default. Developers needing validation can call `node-pre-gyp testbinary` directly.
- Fixed regression in v0.6.35 for electron installs (now skipping binary validation which is not yet supported for electron)
## 0.6.35
- No longer recommending `npm ls` in `prepublish` (#291)
- Fixed testbinary command (#283) @szdavid92
## 0.6.34
- Added new node versions to crosswalk, including v8
- Upgraded deps to latest versions, started using `^` instead of `~` for all deps.
## 0.6.33
- Improved support for yarn
## 0.6.32
- Honor npm configuration for CA bundles (@heikkipora)
- Add node-pre-gyp and npm versions to user agent (@addaleax)
- Updated various deps
- Add known node version for v7.x
## 0.6.31
- Updated various deps
## 0.6.30
- Update to npmlog@4.x and semver@5.3.x
- Add known node version for v6.5.0
## 0.6.29
- Add known node versions for v0.10.45, v0.12.14, v4.4.4, v5.11.1, and v6.1.0
## 0.6.28
- Now more verbose when remote binaries are not available. This is needed since npm is increasingly more quiet by default
and users need to know why builds are falling back to source compiles that might then error out.
## 0.6.27
- Add known node version for node v6
- Stopped bundling dependencies
- Documented method for module authors to avoid bundling node-pre-gyp
- See https://github.com/mapbox/node-pre-gyp/tree/master#configuring for details
## 0.6.26
- Skip validation for nw runtime (https://github.com/mapbox/node-pre-gyp/pull/181) via @fleg
## 0.6.25
- Improved support for auto-detection of electron runtime in `node-pre-gyp.find()`
- Pull request from @enlight - https://github.com/mapbox/node-pre-gyp/pull/187
- Add known node version for 4.4.1 and 5.9.1
## 0.6.24
- Add known node version for 5.8.0, 5.9.0, and 4.4.0.
## 0.6.23
- Add known node version for 0.10.43, 0.12.11, 4.3.2, and 5.7.1.
## 0.6.22
- Add known node version for 4.3.1, and 5.7.0.
## 0.6.21
- Add known node version for 0.10.42, 0.12.10, 4.3.0, and 5.6.0.
## 0.6.20
- Add known node version for 4.2.5, 4.2.6, 5.4.0, 5.4.1,and 5.5.0.
## 0.6.19
- Add known node version for 4.2.4
## 0.6.18
- Add new known node versions for 0.10.x, 0.12.x, 4.x, and 5.x
## 0.6.17
- Re-tagged to fix packaging problem of `Error: Cannot find module 'isarray'`
## 0.6.16
- Added known version in crosswalk for 5.1.0.
## 0.6.15
- Upgraded tar-pack (https://github.com/mapbox/node-pre-gyp/issues/182)
- Support custom binary hosting mirror (https://github.com/mapbox/node-pre-gyp/pull/170)
- Added known version in crosswalk for 4.2.2.
## 0.6.14
- Added node 5.x version
## 0.6.13
- Added more known node 4.x versions
## 0.6.12
- Added support for [Electron](http://electron.atom.io/). Just pass the `--runtime=electron` flag when building/installing. Thanks @zcbenz
## 0.6.11
- Added known node and io.js versions including more 3.x and 4.x versions
## 0.6.10
- Added known node and io.js versions including 3.x and 4.x versions
- Upgraded `tar` dep
## 0.6.9
- Upgraded `rc` dep
- Updated known io.js version: v2.4.0
## 0.6.8
- Upgraded `semver` and `rimraf` deps
- Updated known node and io.js versions
## 0.6.7
- Fixed `node_abi` versions for io.js 1.1.x -> 1.8.x (should be 43, but was stored as 42) (refs https://github.com/iojs/build/issues/94)
## 0.6.6
- Updated with known io.js 2.0.0 version
## 0.6.5
- Now respecting `npm_config_node_gyp` (https://github.com/npm/npm/pull/4887)
- Updated to semver@4.3.2
- Updated known node v0.12.x versions and io.js 1.x versions.
## 0.6.4
- Improved support for `io.js` (@fengmk2)
- Test coverage improvements (@mikemorris)
- Fixed support for `--dist-url` that regressed in 0.6.3
## 0.6.3
- Added support for passing raw options to node-gyp using `--` separator. Flags passed after
the `--` to `node-pre-gyp configure` will be passed directly to gyp while flags passed
after the `--` will be passed directly to make/visual studio.
- Added `node-pre-gyp configure` command to be able to call `node-gyp configure` directly
- Fix issue with require validation not working on windows 7 (@edgarsilva)
## 0.6.2
- Support for io.js >= v1.0.2
- Deferred require of `request` and `tar` to help speed up command line usage of `node-pre-gyp`.
## 0.6.1
- Fixed bundled `tar` version
## 0.6.0
- BREAKING: node odd releases like v0.11.x now use `major.minor.patch` for `{node_abi}` instead of `NODE_MODULE_VERSION` (#124)
- Added support for `toolset` option in versioning. By default is an empty string but `--toolset` can be passed to publish or install to select alternative binaries that target a custom toolset like C++11. For example to target Visual Studio 2014 modules like node-sqlite3 use `--toolset=v140`.
- Added support for `--no-rollback` option to request that a failed binary test does not remove the binary module leaves it in place.
- Added support for `--update-binary` option to request an existing binary be re-installed and the check for a valid local module be skipped.
- Added support for passing build options from `npm` through `node-pre-gyp` to `node-gyp`: `--nodedir`, `--disturl`, `--python`, and `--msvs_version`
## 0.5.31
- Added support for deducing node_abi for node.js runtime from previous release if the series is even
- Added support for --target=0.10.33
## 0.5.30
- Repackaged with latest bundled deps
## 0.5.29
- Added support for semver `build`.
- Fixed support for downloading from urls that include `+`.
## 0.5.28
- Now reporting unix style paths only in reveal command
## 0.5.27
- Fixed support for auto-detecting s3 bucket name when it contains `.` - @taavo
- Fixed support for installing when path contains a `'` - @halfdan
- Ported tests to mocha
## 0.5.26
- Fix node-webkit support when `--target` option is not provided
## 0.5.25
- Fix bundling of deps
## 0.5.24
- Updated ABI crosswalk to incldue node v0.10.30 and v0.10.31
## 0.5.23
- Added `reveal` command. Pass no options to get all versioning data as json. Pass a second arg to grab a single versioned property value
- Added support for `--silent` (shortcut for `--loglevel=silent`)
## 0.5.22
- Fixed node-webkit versioning name (NOTE: node-webkit support still experimental)
## 0.5.21
- New package to fix `shasum check failed` error with v0.5.20
## 0.5.20
- Now versioning node-webkit binaries based on major.minor.patch - assuming no compatible ABI across versions (#90)
## 0.5.19
- Updated to know about more node-webkit releases
## 0.5.18
- Updated to know about more node-webkit releases
## 0.5.17
- Updated to know about node v0.10.29 release
## 0.5.16
- Now supporting all aws-sdk configuration parameters (http://docs.aws.amazon.com/AWSJavaScriptSDK/guide/node-configuring.html) (#86)
## 0.5.15
- Fixed installation of windows packages sub directories on unix systems (#84)
## 0.5.14
- Finished support for cross building using `--target_platform` option (#82)
- Now skipping binary validation on install if target arch/platform do not match the host.
- Removed multi-arch validing for OS X since it required a FAT node.js binary
## 0.5.13
- Fix problem in 0.5.12 whereby the wrong versions of mkdirp and semver where bundled.
## 0.5.12
- Improved support for node-webkit (@Mithgol)
## 0.5.11
- Updated target versions listing
## 0.5.10
- Fixed handling of `-debug` flag passed directory to node-pre-gyp (#72)
- Added optional second arg to `node_pre_gyp.find` to customize the default versioning options used to locate the runtime binary
- Failed install due to `testbinary` check failure no longer leaves behind binary (#70)
## 0.5.9
- Fixed regression in `testbinary` command causing installs to fail on windows with 0.5.7 (#60)
## 0.5.8
- Started bundling deps
## 0.5.7
- Fixed the `testbinary` check, which is used to determine whether to re-download or source compile, to work even in complex dependency situations (#63)
- Exposed the internal `testbinary` command in node-pre-gyp command line tool
- Fixed minor bug so that `fallback_to_build` option is always respected
## 0.5.6
- Added support for versioning on the `name` value in `package.json` (#57).
- Moved to using streams for reading tarball when publishing (#52)
## 0.5.5
- Improved binary validation that also now works with node-webkit (@Mithgol)
- Upgraded test apps to work with node v0.11.x
- Improved test coverage
## 0.5.4
- No longer depends on external install of node-gyp for compiling builds.
## 0.5.3
- Reverted fix for debian/nodejs since it broke windows (#45)
## 0.5.2
- Support for debian systems where the node binary is named `nodejs` (#45)
- Added `bin/node-pre-gyp.cmd` to be able to run command on windows locally (npm creates an .npm automatically when globally installed)
- Updated abi-crosswalk with node v0.10.26 entry.
## 0.5.1
- Various minor bug fixes, several improving windows support for publishing.
## 0.5.0
- Changed property names in `binary` object: now required are `module_name`, `module_path`, and `host`.
- Now `module_path` supports versioning, which allows developers to opt-in to using a versioned install path (#18).
- Added `remote_path` which also supports versioning.
- Changed `remote_uri` to `host`.
## 0.4.2
- Added support for `--target` flag to request cross-compile against a specific node/node-webkit version.
- Added preliminary support for node-webkit
- Fixed support for `--target_arch` option being respected in all cases.
## 0.4.1
- Fixed exception when only stderr is available in binary test (@bendi / #31)
## 0.4.0
- Enforce only `https:` based remote publishing access.
- Added `node-pre-gyp info` command to display listing of published binaries
- Added support for changing the directory node-pre-gyp should build in with the `-C/--directory` option.
- Added support for S3 prefixes.
## 0.3.1
- Added `unpublish` command.
- Fixed module path construction in tests.
- Added ability to disable falling back to build behavior via `npm install --fallback-to-build=false` which overrides setting in a depedencies package.json `install` target.
## 0.3.0
- Support for packaging all files in `module_path` directory - see `app4` for example
- Added `testpackage` command.
- Changed `clean` command to only delete `.node` not entire `build` directory since node-gyp will handle that.
- `.node` modules must be in a folder of there own since tar-pack will remove everything when it unpacks.

View File

@@ -1,27 +0,0 @@
Copyright (c), Mapbox
All rights reserved.
Redistribution and use in source and binary forms, with or without modification,
are permitted provided that the following conditions are met:
* Redistributions of source code must retain the above copyright notice,
this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above copyright notice,
this list of conditions and the following disclaimer in the documentation
and/or other materials provided with the distribution.
* Neither the name of node-pre-gyp nor the names of its contributors
may be used to endorse or promote products derived from this software
without specific prior written permission.
THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR
CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL,
EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO,
PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR
PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF
LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING
NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

View File

@@ -1,742 +0,0 @@
# @mapbox/node-pre-gyp
#### @mapbox/node-pre-gyp makes it easy to publish and install Node.js C++ addons from binaries
[![Build Status](https://travis-ci.com/mapbox/node-pre-gyp.svg?branch=master)](https://travis-ci.com/mapbox/node-pre-gyp)
[![Build status](https://ci.appveyor.com/api/projects/status/3nxewb425y83c0gv)](https://ci.appveyor.com/project/Mapbox/node-pre-gyp)
`@mapbox/node-pre-gyp` stands between [npm](https://github.com/npm/npm) and [node-gyp](https://github.com/Tootallnate/node-gyp) and offers a cross-platform method of binary deployment.
### Special note on previous package
On Feb 9th, 2021 `@mapbox/node-pre-gyp@1.0.0` was [released](./CHANGELOG.md). Older, unscoped versions that are not part of the `@mapbox` org are deprecated and only `@mapbox/node-pre-gyp` will see updates going forward. To upgrade to the new package do:
```
npm uninstall node-pre-gyp --save
npm install @mapbox/node-pre-gyp --save
```
### Features
- A command line tool called `node-pre-gyp` that can install your package's C++ module from a binary.
- A variety of developer targeted commands for packaging, testing, and publishing binaries.
- A JavaScript module that can dynamically require your installed binary: `require('@mapbox/node-pre-gyp').find`
For a hello world example of a module packaged with `node-pre-gyp` see <https://github.com/springmeyer/node-addon-example> and [the wiki ](https://github.com/mapbox/node-pre-gyp/wiki/Modules-using-node-pre-gyp) for real world examples.
## Credits
- The module is modeled after [node-gyp](https://github.com/Tootallnate/node-gyp) by [@Tootallnate](https://github.com/Tootallnate)
- Motivation for initial development came from [@ErisDS](https://github.com/ErisDS) and the [Ghost Project](https://github.com/TryGhost/Ghost).
- Development is sponsored by [Mapbox](https://www.mapbox.com/)
## FAQ
See the [Frequently Ask Questions](https://github.com/mapbox/node-pre-gyp/wiki/FAQ).
## Depends
- Node.js >= node v8.x
## Install
`node-pre-gyp` is designed to be installed as a local dependency of your Node.js C++ addon and accessed like:
./node_modules/.bin/node-pre-gyp --help
But you can also install it globally:
npm install @mapbox/node-pre-gyp -g
## Usage
### Commands
View all possible commands:
node-pre-gyp --help
- clean - Remove the entire folder containing the compiled .node module
- install - Install pre-built binary for module
- reinstall - Run "clean" and "install" at once
- build - Compile the module by dispatching to node-gyp or nw-gyp
- rebuild - Run "clean" and "build" at once
- package - Pack binary into tarball
- testpackage - Test that the staged package is valid
- publish - Publish pre-built binary
- unpublish - Unpublish pre-built binary
- info - Fetch info on published binaries
You can also chain commands:
node-pre-gyp clean build unpublish publish info
### Options
Options include:
- `-C/--directory`: run the command in this directory
- `--build-from-source`: build from source instead of using pre-built binary
- `--update-binary`: reinstall by replacing previously installed local binary with remote binary
- `--runtime=node-webkit`: customize the runtime: `node`, `electron` and `node-webkit` are the valid options
- `--fallback-to-build`: fallback to building from source if pre-built binary is not available
- `--target=0.4.0`: Pass the target node or node-webkit version to compile against
- `--target_arch=ia32`: Pass the target arch and override the host `arch`. Valid values are 'ia32','x64', or `arm`.
- `--target_platform=win32`: Pass the target platform and override the host `platform`. Valid values are `linux`, `darwin`, `win32`, `sunos`, `freebsd`, `openbsd`, and `aix`.
Both `--build-from-source` and `--fallback-to-build` can be passed alone or they can provide values. You can pass `--fallback-to-build=false` to override the option as declared in package.json. In addition to being able to pass `--build-from-source` you can also pass `--build-from-source=myapp` where `myapp` is the name of your module.
For example: `npm install --build-from-source=myapp`. This is useful if:
- `myapp` is referenced in the package.json of a larger app and therefore `myapp` is being installed as a dependency with `npm install`.
- The larger app also depends on other modules installed with `node-pre-gyp`
- You only want to trigger a source compile for `myapp` and the other modules.
### Configuring
This is a guide to configuring your module to use node-pre-gyp.
#### 1) Add new entries to your `package.json`
- Add `@mapbox/node-pre-gyp` to `dependencies`
- Add `aws-sdk` as a `devDependency`
- Add a custom `install` script
- Declare a `binary` object
This looks like:
```js
"dependencies" : {
"@mapbox/node-pre-gyp": "1.x"
},
"devDependencies": {
"aws-sdk": "2.x"
}
"scripts": {
"install": "node-pre-gyp install --fallback-to-build"
},
"binary": {
"module_name": "your_module",
"module_path": "./lib/binding/",
"host": "https://your_module.s3-us-west-1.amazonaws.com"
}
```
For a full example see [node-addon-examples's package.json](https://github.com/springmeyer/node-addon-example/blob/master/package.json).
Let's break this down:
- Dependencies need to list `node-pre-gyp`
- Your devDependencies should list `aws-sdk` so that you can run `node-pre-gyp publish` locally or a CI system. We recommend using `devDependencies` only since `aws-sdk` is large and not needed for `node-pre-gyp install` since it only uses http to fetch binaries
- Your `scripts` section should override the `install` target with `"install": "node-pre-gyp install --fallback-to-build"`. This allows node-pre-gyp to be used instead of the default npm behavior of always source compiling with `node-gyp` directly.
- Your package.json should contain a `binary` section describing key properties you provide to allow node-pre-gyp to package optimally. They are detailed below.
Note: in the past we recommended putting `@mapbox/node-pre-gyp` in the `bundledDependencies`, but we no longer recommend this. In the past there were npm bugs (with node versions 0.10.x) that could lead to node-pre-gyp not being available at the right time during install (unless we bundled). This should no longer be the case. Also, for a time we recommended using `"preinstall": "npm install @mapbox/node-pre-gyp"` as an alternative method to avoid needing to bundle. But this did not behave predictably across all npm versions - see https://github.com/mapbox/node-pre-gyp/issues/260 for the details. So we do not recommend using `preinstall` to install `@mapbox/node-pre-gyp`. More history on this at https://github.com/strongloop/fsevents/issues/157#issuecomment-265545908.
##### The `binary` object has three required properties
###### module_name
The name of your native node module. This value must:
- Match the name passed to [the NODE_MODULE macro](http://nodejs.org/api/addons.html#addons_hello_world)
- Must be a valid C variable name (e.g. it cannot contain `-`)
- Should not include the `.node` extension.
###### module_path
The location your native module is placed after a build. This should be an empty directory without other Javascript files. This entire directory will be packaged in the binary tarball. When installing from a remote package this directory will be overwritten with the contents of the tarball.
Note: This property supports variables based on [Versioning](#versioning).
###### host
A url to the remote location where you've published tarball binaries (must be `https` not `http`).
It is highly recommended that you use Amazon S3. The reasons are:
- Various node-pre-gyp commands like `publish` and `info` only work with an S3 host.
- S3 is a very solid hosting platform for distributing large files.
- We provide detail documentation for using [S3 hosting](#s3-hosting) with node-pre-gyp.
Why then not require S3? Because while some applications using node-pre-gyp need to distribute binaries as large as 20-30 MB, others might have very small binaries and might wish to store them in a GitHub repo. This is not recommended, but if an author really wants to host in a non-S3 location then it should be possible.
It should also be mentioned that there is an optional and entirely separate npm module called [node-pre-gyp-github](https://github.com/bchr02/node-pre-gyp-github) which is intended to complement node-pre-gyp and be installed along with it. It provides the ability to store and publish your binaries within your repositories GitHub Releases if you would rather not use S3 directly. Installation and usage instructions can be found [here](https://github.com/bchr02/node-pre-gyp-github), but the basic premise is that instead of using the ```node-pre-gyp publish``` command you would use ```node-pre-gyp-github publish```.
##### The `binary` object other optional S3 properties
If you are not using a standard s3 path like `bucket_name.s3(.-)region.amazonaws.com`, you might get an error on `publish` because node-pre-gyp extracts the region and bucket from the `host` url. For example, you may have an on-premises s3-compatible storage server, or may have configured a specific dns redirecting to an s3 endpoint. In these cases, you can explicitly set the `region` and `bucket` properties to tell node-pre-gyp to use these values instead of guessing from the `host` property. The following values can be used in the `binary` section:
###### host
The url to the remote server root location (must be `https` not `http`).
###### bucket
The bucket name where your tarball binaries should be located.
###### region
Your S3 server region.
###### s3ForcePathStyle
Set `s3ForcePathStyle` to true if the endpoint url should not be prefixed with the bucket name. If false (default), the server endpoint would be constructed as `bucket_name.your_server.com`.
##### The `binary` object has optional properties
###### remote_path
It **is recommended** that you customize this property. This is an extra path to use for publishing and finding remote tarballs. The default value for `remote_path` is `""` meaning that if you do not provide it then all packages will be published at the base of the `host`. It is recommended to provide a value like `./{name}/v{version}` to help organize remote packages in the case that you choose to publish multiple node addons to the same `host`.
Note: This property supports variables based on [Versioning](#versioning).
###### package_name
It is **not recommended** to override this property unless you are also overriding the `remote_path`. This is the versioned name of the remote tarball containing the binary `.node` module and any supporting files you've placed inside the `module_path` directory. Unless you specify `package_name` in your `package.json` then it defaults to `{module_name}-v{version}-{node_abi}-{platform}-{arch}.tar.gz` which allows your binary to work across node versions, platforms, and architectures. If you are using `remote_path` that is also versioned by `./{module_name}/v{version}` then you could remove these variables from the `package_name` and just use: `{node_abi}-{platform}-{arch}.tar.gz`. Then your remote tarball will be looked up at, for example, `https://example.com/your-module/v0.1.0/node-v11-linux-x64.tar.gz`.
Avoiding the version of your module in the `package_name` and instead only embedding in a directory name can be useful when you want to make a quick tag of your module that does not change any C++ code. In this case you can just copy binaries to the new version behind the scenes like:
```sh
aws s3 sync --acl public-read s3://mapbox-node-binary/sqlite3/v3.0.3/ s3://mapbox-node-binary/sqlite3/v3.0.4/
```
Note: This property supports variables based on [Versioning](#versioning).
#### 2) Add a new target to binding.gyp
`node-pre-gyp` calls out to `node-gyp` to compile the module and passes variables along like [module_name](#module_name) and [module_path](#module_path).
A new target must be added to `binding.gyp` that moves the compiled `.node` module from `./build/Release/module_name.node` into the directory specified by `module_path`.
Add a target like this at the end of your `targets` list:
```js
{
"target_name": "action_after_build",
"type": "none",
"dependencies": [ "<(module_name)" ],
"copies": [
{
"files": [ "<(PRODUCT_DIR)/<(module_name).node" ],
"destination": "<(module_path)"
}
]
}
```
For a full example see [node-addon-example's binding.gyp](https://github.com/springmeyer/node-addon-example/blob/2ff60a8ded7f042864ad21db00c3a5a06cf47075/binding.gyp).
#### 3) Dynamically require your `.node`
Inside the main js file that requires your addon module you are likely currently doing:
```js
var binding = require('../build/Release/binding.node');
```
or:
```js
var bindings = require('./bindings')
```
Change those lines to:
```js
var binary = require('@mapbox/node-pre-gyp');
var path = require('path');
var binding_path = binary.find(path.resolve(path.join(__dirname,'./package.json')));
var binding = require(binding_path);
```
For a full example see [node-addon-example's index.js](https://github.com/springmeyer/node-addon-example/blob/2ff60a8ded7f042864ad21db00c3a5a06cf47075/index.js#L1-L4)
#### 4) Build and package your app
Now build your module from source:
npm install --build-from-source
The `--build-from-source` tells `node-pre-gyp` to not look for a remote package and instead dispatch to node-gyp to build.
Now `node-pre-gyp` should now also be installed as a local dependency so the command line tool it offers can be found at `./node_modules/.bin/node-pre-gyp`.
#### 5) Test
Now `npm test` should work just as it did before.
#### 6) Publish the tarball
Then package your app:
./node_modules/.bin/node-pre-gyp package
Once packaged, now you can publish:
./node_modules/.bin/node-pre-gyp publish
Currently the `publish` command pushes your binary to S3. This requires:
- You have installed `aws-sdk` with `npm install aws-sdk`
- You have created a bucket already.
- The `host` points to an S3 http or https endpoint.
- You have configured node-pre-gyp to read your S3 credentials (see [S3 hosting](#s3-hosting) for details).
You can also host your binaries elsewhere. To do this requires:
- You manually publish the binary created by the `package` command to an `https` endpoint
- Ensure that the `host` value points to your custom `https` endpoint.
#### 7) Automate builds
Now you need to publish builds for all the platforms and node versions you wish to support. This is best automated.
- See [Appveyor Automation](#appveyor-automation) for how to auto-publish builds on Windows.
- See [Travis Automation](#travis-automation) for how to auto-publish builds on OS X and Linux.
#### 8) You're done!
Now publish your module to the npm registry. Users will now be able to install your module from a binary.
What will happen is this:
1. `npm install <your package>` will pull from the npm registry
2. npm will run the `install` script which will call out to `node-pre-gyp`
3. `node-pre-gyp` will fetch the binary `.node` module and unpack in the right place
4. Assuming that all worked, you are done
If a a binary was not available for a given platform and `--fallback-to-build` was used then `node-gyp rebuild` will be called to try to source compile the module.
#### 9) One more option
It may be that you want to work with two s3 buckets, one for staging and one for production; this
arrangement makes it less likely to accidentally overwrite a production binary. It also allows the production
environment to have more restrictive permissions than staging while still enabling publishing when
developing and testing.
The binary.host property can be set at execution time. In order to do so all of the following conditions
must be true.
- binary.host is falsey or not present
- binary.staging_host is not empty
- binary.production_host is not empty
If any of these checks fail then the operation will not perform execution time determination of the s3 target.
If the command being executed is either "publish" or "unpublish" then the default is set to `binary.staging_host`. In all other cases
the default is `binary.production_host`.
The command-line options `--s3_host=staging` or `--s3_host=production` override the default. If `s3_host`
is present and not `staging` or `production` an exception is thrown.
This allows installing from staging by specifying `--s3_host=staging`. And it requires specifying
`--s3_option=production` in order to publish to, or unpublish from, production, making accidental errors less likely.
## Node-API Considerations
[Node-API](https://nodejs.org/api/n-api.html#n_api_node_api), which was previously known as N-API, is an ABI-stable alternative to previous technologies such as [nan](https://github.com/nodejs/nan) which are tied to a specific Node runtime engine. Node-API is Node runtime engine agnostic and guarantees modules created today will continue to run, without changes, into the future.
Using `node-pre-gyp` with Node-API projects requires a handful of additional configuration values and imposes some additional requirements.
The most significant difference is that an Node-API module can be coded to target multiple Node-API versions. Therefore, an Node-API module must declare in its `package.json` file which Node-API versions the module is designed to run against. In addition, since multiple builds may be required for a single module, path and file names must be specified in way that avoids naming conflicts.
### The `napi_versions` array property
A Node-API module must declare in its `package.json` file, the Node-API versions the module is intended to support. This is accomplished by including an `napi-versions` array property in the `binary` object. For example:
```js
"binary": {
"module_name": "your_module",
"module_path": "your_module_path",
"host": "https://your_bucket.s3-us-west-1.amazonaws.com",
"napi_versions": [1,3]
}
```
If the `napi_versions` array property is *not* present, `node-pre-gyp` operates as it always has. Including the `napi_versions` array property instructs `node-pre-gyp` that this is a Node-API module build.
When the `napi_versions` array property is present, `node-pre-gyp` fires off multiple operations, one for each of the Node-API versions in the array. In the example above, two operations are initiated, one for Node-API version 1 and second for Node-API version 3. How this version number is communicated is described next.
### The `napi_build_version` value
For each of the Node-API module operations `node-pre-gyp` initiates, it ensures that the `napi_build_version` is set appropriately.
This value is of importance in two areas:
1. The C/C++ code which needs to know against which Node-API version it should compile.
2. `node-pre-gyp` itself which must assign appropriate path and file names to avoid collisions.
### Defining `NAPI_VERSION` for the C/C++ code
The `napi_build_version` value is communicated to the C/C++ code by adding this code to the `binding.gyp` file:
```
"defines": [
"NAPI_VERSION=<(napi_build_version)",
]
```
This ensures that `NAPI_VERSION`, an integer value, is declared appropriately to the C/C++ code for each build.
> Note that earlier versions of this document recommended defining the symbol `NAPI_BUILD_VERSION`. `NAPI_VERSION` is preferred because it used by the Node-API C/C++ headers to configure the specific Node-API versions being requested.
### Path and file naming requirements in `package.json`
Since `node-pre-gyp` fires off multiple operations for each request, it is essential that path and file names be created in such a way as to avoid collisions. This is accomplished by imposing additional path and file naming requirements.
Specifically, when performing Node-API builds, the `{napi_build_version}` text configuration value *must* be present in the `module_path` property. In addition, the `{napi_build_version}` text configuration value *must* be present in either the `remote_path` or `package_name` property. (No problem if it's in both.)
Here's an example:
```js
"binary": {
"module_name": "your_module",
"module_path": "./lib/binding/napi-v{napi_build_version}",
"remote_path": "./{module_name}/v{version}/{configuration}/",
"package_name": "{platform}-{arch}-napi-v{napi_build_version}.tar.gz",
"host": "https://your_bucket.s3-us-west-1.amazonaws.com",
"napi_versions": [1,3]
}
```
## Supporting both Node-API and NAN builds
You may have a legacy native add-on that you wish to continue supporting for those versions of Node that do not support Node-API, as you add Node-API support for later Node versions. This can be accomplished by specifying the `node_napi_label` configuration value in the package.json `binary.package_name` property.
Placing the configuration value `node_napi_label` in the package.json `binary.package_name` property instructs `node-pre-gyp` to build all viable Node-API binaries supported by the current Node instance. If the current Node instance does not support Node-API, `node-pre-gyp` will request a traditional, non-Node-API build.
The configuration value `node_napi_label` is set by `node-pre-gyp` to the type of build created, `napi` or `node`, and the version number. For Node-API builds, the string contains the Node-API version nad has values like `napi-v3`. For traditional, non-Node-API builds, the string contains the ABI version with values like `node-v46`.
Here's how the `binary` configuration above might be changed to support both Node-API and NAN builds:
```js
"binary": {
"module_name": "your_module",
"module_path": "./lib/binding/{node_napi_label}",
"remote_path": "./{module_name}/v{version}/{configuration}/",
"package_name": "{platform}-{arch}-{node_napi_label}.tar.gz",
"host": "https://your_bucket.s3-us-west-1.amazonaws.com",
"napi_versions": [1,3]
}
```
The C/C++ symbol `NAPI_VERSION` can be used to distinguish Node-API and non-Node-API builds. The value of `NAPI_VERSION` is set to the integer Node-API version for Node-API builds and is set to `0` for non-Node-API builds.
For example:
```C
#if NAPI_VERSION
// Node-API code goes here
#else
// NAN code goes here
#endif
```
### Two additional configuration values
The following two configuration values, which were implemented in previous versions of `node-pre-gyp`, continue to exist, but have been replaced by the `node_napi_label` configuration value described above.
1. `napi_version` If Node-API is supported by the currently executing Node instance, this value is the Node-API version number supported by Node. If Node-API is not supported, this value is an empty string.
2. `node_abi_napi` If the value returned for `napi_version` is non-empty, this value is `'napi'`. If the value returned for `napi_version` is empty, this value is the value returned for `node_abi`.
These values are present for use in the `binding.gyp` file and may be used as `{napi_version}` and `{node_abi_napi}` for text substitution in the `binary` properties of the `package.json` file.
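For example, a legacy `binary` configuration might reference these two values like the fragment below. This is an illustrative sketch only; the bucket name and path layout are placeholders, not a configuration the project documents:

```js
"binary": {
    "module_name": "your_module",
    "module_path": "./lib/binding/{node_abi_napi}-{napi_version}",
    "package_name": "{platform}-{arch}-{node_abi_napi}-{napi_version}.tar.gz",
    "host": "https://your_bucket.s3-us-west-1.amazonaws.com"
}
```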
## S3 Hosting
You can host wherever you choose but S3 is cheap, `node-pre-gyp publish` expects it, and S3 can be integrated well with [Travis.ci](http://travis-ci.org) to automate builds for OS X and Ubuntu, and with [Appveyor](http://appveyor.com) to automate builds for Windows. Here is an approach to do this:
First, get set up locally and test the workflow:
#### 1) Create an S3 bucket
And have your **key** and **secret key** ready for writing to the bucket.
It is recommended to create an IAM user with a policy that only grants permissions to the specific bucket you plan to publish to. This can be done in the [IAM console](https://console.aws.amazon.com/iam/) by: 1) adding a new user, 2) choosing `Attach User Policy`, 3) using the `Policy Generator`, 4) selecting `Amazon S3` for the service, 5) adding the actions: `DeleteObject`, `GetObject`, `GetObjectAcl`, `ListBucket`, `HeadBucket`, `PutObject`, `PutObjectAcl`, 6) adding an ARN of `arn:aws:s3:::bucket/*` (replacing `bucket` with your bucket name), and finally 7) clicking `Add Statement` and saving the policy. It should generate a policy like:
```js
{
"Version": "2012-10-17",
"Statement": [
{
"Sid": "objects",
"Effect": "Allow",
"Action": [
"s3:PutObject",
"s3:GetObjectAcl",
"s3:GetObject",
"s3:DeleteObject",
"s3:PutObjectAcl"
],
"Resource": "arn:aws:s3:::your-bucket-name/*"
},
{
"Sid": "bucket",
"Effect": "Allow",
"Action": "s3:ListBucket",
"Resource": "arn:aws:s3:::your-bucket-name"
},
{
"Sid": "buckets",
"Effect": "Allow",
"Action": "s3:HeadBucket",
"Resource": "*"
}
]
}
```
#### 2) Install node-pre-gyp
Either install it globally:
npm install node-pre-gyp -g
Or put the local version on your PATH
export PATH=`pwd`/node_modules/.bin/:$PATH
#### 3) Configure AWS credentials
It is recommended to configure the AWS JS SDK v2 used internally by `node-pre-gyp` by setting these environment variables:
- AWS_ACCESS_KEY_ID
- AWS_SECRET_ACCESS_KEY
You can also use the `Shared Config File` mentioned [in the AWS JS SDK v2 docs](https://docs.aws.amazon.com/sdk-for-javascript/v2/developer-guide/configuring-the-jssdk.html).
#### 4) Package and publish your build
Install the `aws-sdk`:
npm install aws-sdk
Then publish:
node-pre-gyp package publish
Note: if you hit an error like `Hostname/IP doesn't match certificate's altnames` it may mean that you need to provide the `region` option in your config.
## Appveyor Automation
[Appveyor](http://www.appveyor.com/) can build binaries and publish the results per commit and supports:
- Windows Visual Studio 2013 and related compilers
- Both 64 bit (x64) and 32 bit (x86) build configurations
- Multiple Node.js versions
For an example of doing this see [node-sqlite3's appveyor.yml](https://github.com/mapbox/node-sqlite3/blob/master/appveyor.yml).
Below is a guide to getting set up:
#### 1) Create a free Appveyor account
Go to https://ci.appveyor.com/signup/free and sign in with your GitHub account.
#### 2) Create a new project
Go to https://ci.appveyor.com/projects/new and select the GitHub repo for your module
#### 3) Add appveyor.yml and push it
Once you have committed an `appveyor.yml` ([appveyor.yml reference](http://www.appveyor.com/docs/appveyor-yml)) to your GitHub repo and pushed it, AppVeyor should automatically start building your project.
#### 4) Create secure variables
Encrypt your S3 AWS keys by going to <https://ci.appveyor.com/tools/encrypt> and hitting the `encrypt` button.
Then paste the result into your `appveyor.yml`
```yml
environment:
AWS_ACCESS_KEY_ID:
secure: Dn9HKdLNYvDgPdQOzRq/DqZ/MPhjknRHB1o+/lVU8MA=
AWS_SECRET_ACCESS_KEY:
secure: W1rwNoSnOku1r+28gnoufO8UA8iWADmL1LiiwH9IOkIVhDTNGdGPJqAlLjNqwLnL
```
NOTE: keys are per account but not per repo (this is different from Travis, where keys are per repo but not related to the account used to encrypt them).
#### 5) Hook up publishing
Just put `node-pre-gyp package publish` in your `appveyor.yml` after `npm install`.
#### 6) Publish when you want
You might wish to publish binaries only on a specific commit. To do this you could borrow from the [Travis CI idea of commit keywords](http://about.travis-ci.org/docs/user/how-to-skip-a-build/) and add special handling for commit messages with `[publish binary]`:
SET CM=%APPVEYOR_REPO_COMMIT_MESSAGE%
if not "%CM%" == "%CM:[publish binary]=%" node-pre-gyp --msvs_version=2013 publish
If your commit message contains special characters (e.g. `&`) this method might fail. An alternative is to use PowerShell, which gives you additional possibilities, like ignoring case by using `ToLower()`:
ps: if($env:APPVEYOR_REPO_COMMIT_MESSAGE.ToLower().Contains('[publish binary]')) { node-pre-gyp --msvs_version=2013 publish }
Remember this publishing is not the same as `npm publish`. We're just talking about the binary module here and not your entire npm package.
## Travis Automation
[Travis](https://travis-ci.org/) can push to S3 after a successful build and supports both:
- Ubuntu Precise and OS X (64 bit)
- Multiple Node.js versions
For an example of doing this see [node-addon-example's .travis.yml](https://github.com/springmeyer/node-addon-example/blob/2ff60a8ded7f042864ad21db00c3a5a06cf47075/.travis.yml).
Note: if you need 32 bit binaries, this can be done from a 64 bit Travis machine. See [the node-sqlite3 scripts for an example of doing this](https://github.com/mapbox/node-sqlite3/blob/bae122aa6a2b8a45f6b717fab24e207740e32b5d/scripts/build_against_node.sh#L54-L74).
Below is a guide to getting set up:
#### 1) Install the Travis gem
gem install travis
#### 2) Create secure variables
Make sure you run this command from within the directory of your module.
Use `travis-encrypt` like:
travis encrypt AWS_ACCESS_KEY_ID=${node_pre_gyp_accessKeyId}
travis encrypt AWS_SECRET_ACCESS_KEY=${node_pre_gyp_secretAccessKey}
Then put those values in your `.travis.yml` like:
```yaml
env:
global:
- secure: F+sEL/v56CzHqmCSSES4pEyC9NeQlkoR0Gs/ZuZxX1ytrj8SKtp3MKqBj7zhIclSdXBz4Ev966Da5ctmcTd410p0b240MV6BVOkLUtkjZJyErMBOkeb8n8yVfSoeMx8RiIhBmIvEn+rlQq+bSFis61/JkE9rxsjkGRZi14hHr4M=
- secure: o2nkUQIiABD139XS6L8pxq3XO5gch27hvm/gOdV+dzNKc/s2KomVPWcOyXNxtJGhtecAkABzaW8KHDDi5QL1kNEFx6BxFVMLO8rjFPsMVaBG9Ks6JiDQkkmrGNcnVdxI/6EKTLHTH5WLsz8+J7caDBzvKbEfTux5EamEhxIWgrI=
```
More details on Travis encryption at http://about.travis-ci.org/docs/user/encryption-keys/.
#### 3) Hook up publishing
Just put `node-pre-gyp package publish` in your `.travis.yml` after `npm install`.
##### OS X publishing
If you want binaries for OS X in addition to Linux, you can enable [multi-os for Travis](http://docs.travis-ci.com/user/multi-os/#Setting-.travis.yml).
Use a configuration like:
```yml
language: cpp
os:
- linux
- osx
env:
matrix:
- NODE_VERSION="4"
- NODE_VERSION="6"
before_install:
- rm -rf ~/.nvm/ && git clone --depth 1 https://github.com/creationix/nvm.git ~/.nvm
- source ~/.nvm/nvm.sh
- nvm install $NODE_VERSION
- nvm use $NODE_VERSION
```
See [Travis OS X Gotchas](#travis-os-x-gotchas) for why we replace `language: node_js` and `node_js:` sections with `language: cpp` and a custom matrix.
Also create platform-specific sections for any dependencies that need installing. For example, if you need libpng:
```yml
- if [ $(uname -s) == 'Linux' ]; then apt-get install libpng-dev; fi;
- if [ $(uname -s) == 'Darwin' ]; then brew install libpng; fi;
```
For detailed multi-OS examples see [node-mapnik](https://github.com/mapnik/node-mapnik/blob/master/.travis.yml) and [node-sqlite3](https://github.com/mapbox/node-sqlite3/blob/master/.travis.yml).
##### Travis OS X Gotchas
First, unlike the Travis Linux machines, the OS X machines do not put `node-pre-gyp` on PATH by default. To do so you will need to:
```sh
export PATH=$(pwd)/node_modules/.bin:${PATH}
```
Second, the OS X machines do not support using a matrix for installing different Node.js versions. So you need to bootstrap the installation of Node.js in a cross-platform way, for example:
```yml
env:
matrix:
- NODE_VERSION="4"
- NODE_VERSION="6"
before_install:
- rm -rf ~/.nvm/ && git clone --depth 1 https://github.com/creationix/nvm.git ~/.nvm
- source ~/.nvm/nvm.sh
- nvm install $NODE_VERSION
- nvm use $NODE_VERSION
```
You can easily recreate the previous behavior of this matrix:
```yml
node_js:
- "4"
- "6"
```
#### 4) Publish when you want
You might wish to publish binaries only on a specific commit. To do this you could borrow from the [Travis CI idea of commit keywords](http://about.travis-ci.org/docs/user/how-to-skip-a-build/) and add special handling for commit messages with `[publish binary]`:
COMMIT_MESSAGE=$(git log --format=%B --no-merges -n 1 | tr -d '\n')
if [[ ${COMMIT_MESSAGE} =~ "[publish binary]" ]]; then node-pre-gyp publish; fi;
Then you can trigger new binaries to be built like:
git commit -a -m "[publish binary]"
Or, if you don't have any changes to make simply run:
git commit --allow-empty -m "[publish binary]"
WARNING: if you are working in a pull request and publishing binaries from there then you will want to avoid double publishing when Travis CI builds both the `push` and `pr`. You only want to run the publish on the `push` commit. See https://github.com/Project-OSRM/node-osrm/blob/8eb837abe2e2e30e595093d16e5354bc5c573575/scripts/is_pr_merge.sh which is called from https://github.com/Project-OSRM/node-osrm/blob/8eb837abe2e2e30e595093d16e5354bc5c573575/scripts/publish.sh for an example of how to do this.
Remember this publishing is not the same as `npm publish`. We're just talking about the binary module here and not your entire npm package. To automate the publishing of your entire package to npm on Travis see http://about.travis-ci.org/docs/user/deployment/npm/
# Versioning
The `binary` properties of `module_path`, `remote_path`, and `package_name` support variable substitution. The strings are evaluated by `node-pre-gyp` depending on your system and any custom build flags you passed.
- `node_abi`: The node C++ `ABI` number. This value is available in JavaScript as `process.versions.modules` as of [`>= v0.10.4 >= v0.11.7`](https://github.com/joyent/node/commit/ccabd4a6fa8a6eb79d29bc3bbe9fe2b6531c2d8e) and in C++ as the `NODE_MODULE_VERSION` define much earlier. For versions of Node before this was available we fall back to the V8 major and minor version.
- `platform` matches node's `process.platform` like `linux`, `darwin`, and `win32` unless the user passed the `--target_platform` option to override.
- `arch` matches node's `process.arch` like `x64` or `ia32` unless the user passes the `--target_arch` option to override.
- `libc` matches `require('detect-libc').family` like `glibc` or `musl` unless the user passes the `--target_libc` option to override.
- `configuration` - Either 'Release' or 'Debug' depending on whether `--debug` is passed during the build.
- `module_name` - the `binary.module_name` attribute from `package.json`.
- `version` - the semver `version` value for your module from `package.json` (NOTE: ignores the `semver.build` property).
- `major`, `minor`, `patch`, and `prerelease` match the individual semver values for your module's `version`
- `build` - the semver `build` value. For example it would be `this.that` if your package.json `version` was `v1.0.0+this.that`
- `prerelease` - the semver `prerelease` value. For example it would be `alpha.beta` if your package.json `version` was `v1.0.0-alpha.beta`
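The substitution itself is straightforward string templating. Below is a minimal sketch of how a `package_name` template might expand; it is illustrative only, and `node-pre-gyp`'s real evaluation lives in its `lib/util/versioning.js`:

```js
// Minimal sketch of the {variable} substitution described above.
// Illustrative only; not node-pre-gyp's actual implementation.
function substitute(template, values) {
  return template.replace(/\{([a-z_]+)\}/g, (match, name) =>
    name in values ? values[name] : match);
}

const packageName = substitute('{platform}-{arch}-{node_napi_label}.tar.gz', {
  platform: process.platform, // e.g. "linux"
  arch: process.arch, // e.g. "x64"
  node_napi_label: 'napi-v3'
});
console.log(packageName); // e.g. "linux-x64-napi-v3.tar.gz"
```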
The options are visible in the code at <https://github.com/mapbox/node-pre-gyp/blob/612b7bca2604508d881e1187614870ba19a7f0c5/lib/util/versioning.js#L114-L127>
# Download binary files from a mirror
S3 is blocked in some regions (notably China), so binary downloads from it can fail there.
Using the `npm` config argument `--{module_name}_binary_host_mirror`, you can download binary files through a mirror; any `-` in `module_name` is replaced with `_`.
e.g.: Install [v8-profiler](https://www.npmjs.com/package/v8-profiler) from `npm`.
```bash
$ npm install v8-profiler --profiler_binary_host_mirror=https://npm.taobao.org/mirrors/node-inspector/
```
e.g.: Install [canvas-prebuilt](https://www.npmjs.com/package/canvas-prebuilt) from `npm`.
```bash
$ npm install canvas-prebuilt --canvas_prebuilt_binary_host_mirror=https://npm.taobao.org/mirrors/canvas-prebuilt/
```

---
#!/usr/bin/env node
'use strict';
require('../lib/main');

---
@echo off
node "%~dp0\node-pre-gyp" %*

---
# Contributing
### Releasing a new version:
- Ensure tests are passing on Travis and Appveyor
- Run `node scripts/abi_crosswalk.js` and commit any changes
- Update the changelog
- Tag a new release like: `git tag -a v0.6.34 -m "tagging v0.6.34" && git push --tags`
- Run `npm publish`

---
'use strict';
module.exports = exports = build;
exports.usage = 'Attempts to compile the module by dispatching to node-gyp or nw-gyp';
const napi = require('./util/napi.js');
const compile = require('./util/compile.js');
const handle_gyp_opts = require('./util/handle_gyp_opts.js');
const configure = require('./configure.js');
function do_build(gyp, argv, callback) {
handle_gyp_opts(gyp, argv, (err, result) => {
let final_args = ['build'].concat(result.gyp).concat(result.pre);
if (result.unparsed.length > 0) {
final_args = final_args.
concat(['--']).
concat(result.unparsed);
}
if (!err && result.opts.napi_build_version) {
napi.swap_build_dir_in(result.opts.napi_build_version);
}
compile.run_gyp(final_args, result.opts, (err2) => {
if (result.opts.napi_build_version) {
napi.swap_build_dir_out(result.opts.napi_build_version);
}
return callback(err2);
});
});
}
function build(gyp, argv, callback) {
// Form up commands to pass to node-gyp:
// We map `node-pre-gyp build` to `node-gyp configure build` so that we do not
// trigger a clean and therefore do not pay the penalty of a full recompile
if (argv.length && (argv.indexOf('rebuild') > -1)) {
argv.shift(); // remove `rebuild`
// here we map `node-pre-gyp rebuild` to `node-gyp rebuild` which internally means
// "clean + configure + build" and triggers a full recompile
compile.run_gyp(['clean'], {}, (err3) => {
if (err3) return callback(err3);
configure(gyp, argv, (err4) => {
if (err4) return callback(err4);
return do_build(gyp, argv, callback);
});
});
} else {
return do_build(gyp, argv, callback);
}
}

---
'use strict';
module.exports = exports = clean;
exports.usage = 'Removes the entire folder containing the compiled .node module';
const rm = require('rimraf');
const exists = require('fs').exists || require('path').exists;
const versioning = require('./util/versioning.js');
const napi = require('./util/napi.js');
const path = require('path');
function clean(gyp, argv, callback) {
const package_json = gyp.package_json;
const napi_build_version = napi.get_napi_build_version_from_command_args(argv);
const opts = versioning.evaluate(package_json, gyp.opts, napi_build_version);
const to_delete = opts.module_path;
if (!to_delete) {
return callback(new Error('module_path is empty, refusing to delete'));
} else if (path.normalize(to_delete) === path.normalize(process.cwd())) {
return callback(new Error('module_path is not set, refusing to delete'));
} else {
exists(to_delete, (found) => {
if (found) {
if (!gyp.opts.silent_clean) console.log('[' + package_json.name + '] Removing "%s"', to_delete);
return rm(to_delete, callback);
}
return callback();
});
}
}

---
'use strict';
module.exports = exports = configure;
exports.usage = 'Attempts to configure node-gyp or nw-gyp build';
const napi = require('./util/napi.js');
const compile = require('./util/compile.js');
const handle_gyp_opts = require('./util/handle_gyp_opts.js');
function configure(gyp, argv, callback) {
handle_gyp_opts(gyp, argv, (err, result) => {
let final_args = result.gyp.concat(result.pre);
// pull select node-gyp configure options out of the npm environ
const known_gyp_args = ['dist-url', 'python', 'nodedir', 'msvs_version'];
known_gyp_args.forEach((key) => {
const val = gyp.opts[key] || gyp.opts[key.replace('-', '_')];
if (val) {
final_args.push('--' + key + '=' + val);
}
});
// --ensure=false tells node-gyp to re-install node development headers
// but it is only respected by node-gyp install, so we have to call install
// as a separate step if the user passes it
if (gyp.opts.ensure === false) {
const install_args = final_args.concat(['install', '--ensure=false']);
compile.run_gyp(install_args, result.opts, (err2) => {
if (err2) return callback(err2);
if (result.unparsed.length > 0) {
final_args = final_args.
concat(['--']).
concat(result.unparsed);
}
compile.run_gyp(['configure'].concat(final_args), result.opts, (err3) => {
return callback(err3);
});
});
} else {
if (result.unparsed.length > 0) {
final_args = final_args.
concat(['--']).
concat(result.unparsed);
}
compile.run_gyp(['configure'].concat(final_args), result.opts, (err4) => {
if (!err4 && result.opts.napi_build_version) {
napi.swap_build_dir_out(result.opts.napi_build_version);
}
return callback(err4);
});
}
});
}

---
'use strict';
module.exports = exports = info;
exports.usage = 'Lists all published binaries (requires aws-sdk)';
const log = require('npmlog');
const versioning = require('./util/versioning.js');
const s3_setup = require('./util/s3_setup.js');
function info(gyp, argv, callback) {
const package_json = gyp.package_json;
const opts = versioning.evaluate(package_json, gyp.opts);
const config = {};
s3_setup.detect(opts, config);
const s3 = s3_setup.get_s3(config);
const s3_opts = {
Bucket: config.bucket,
Prefix: config.prefix
};
s3.listObjects(s3_opts, (err, meta) => {
if (err && err.code === 'NotFound') {
return callback(new Error('[' + package_json.name + '] Not found: https://' + s3_opts.Bucket + '.s3.amazonaws.com/' + config.prefix));
} else if (err) {
return callback(err);
} else {
log.verbose(JSON.stringify(meta, null, 1));
if (meta && meta.Contents) {
meta.Contents.forEach((obj) => {
console.log(obj.Key);
});
} else {
console.error('[' + package_json.name + '] No objects found at https://' + s3_opts.Bucket + '.s3.amazonaws.com/' + config.prefix);
}
return callback();
}
});
}

---
'use strict';
module.exports = exports = install;
exports.usage = 'Attempts to install pre-built binary for module';
const fs = require('fs');
const path = require('path');
const log = require('npmlog');
const existsAsync = fs.exists || path.exists;
const versioning = require('./util/versioning.js');
const napi = require('./util/napi.js');
const makeDir = require('make-dir');
// for fetching binaries
const fetch = require('node-fetch');
const tar = require('tar');
let npgVersion = 'unknown';
try {
// Read own package.json to get the current node-pre-gyp version.
const ownPackageJSON = fs.readFileSync(path.join(__dirname, '..', 'package.json'), 'utf8');
npgVersion = JSON.parse(ownPackageJSON).version;
} catch (e) {
// do nothing
}
function place_binary(uri, targetDir, opts, callback) {
log.http('GET', uri);
// Try getting version info from the currently running npm.
const envVersionInfo = process.env.npm_config_user_agent ||
'node ' + process.version;
const sanitized = uri.replace('+', '%2B');
const requestOpts = {
uri: sanitized,
headers: {
'User-Agent': 'node-pre-gyp (v' + npgVersion + ', ' + envVersionInfo + ')'
},
follow_max: 10
};
if (opts.cafile) {
try {
requestOpts.ca = fs.readFileSync(opts.cafile);
} catch (e) {
return callback(e);
}
} else if (opts.ca) {
requestOpts.ca = opts.ca;
}
const proxyUrl = opts.proxy ||
process.env.http_proxy ||
process.env.HTTP_PROXY ||
process.env.npm_config_proxy;
let agent;
if (proxyUrl) {
const ProxyAgent = require('https-proxy-agent');
agent = new ProxyAgent(proxyUrl);
log.http('download', 'proxy agent configured using: "%s"', proxyUrl);
}
fetch(sanitized, { agent })
.then((res) => {
if (!res.ok) {
throw new Error(`response status ${res.status} ${res.statusText} on ${sanitized}`);
}
const dataStream = res.body;
return new Promise((resolve, reject) => {
let extractions = 0;
const countExtractions = (entry) => {
extractions += 1;
log.info('install', 'unpacking %s', entry.path);
};
dataStream.pipe(extract(targetDir, countExtractions))
.on('error', (e) => {
reject(e);
});
dataStream.on('end', () => {
resolve(`extracted file count: ${extractions}`);
});
dataStream.on('error', (e) => {
reject(e);
});
});
})
.then((text) => {
log.info(text);
callback();
})
.catch((e) => {
log.error(`install ${e.message}`);
callback(e);
});
}
function extract(to, onentry) {
return tar.extract({
cwd: to,
strip: 1,
onentry
});
}
function extract_from_local(from, targetDir, callback) {
if (!fs.existsSync(from)) {
return callback(new Error('Cannot find file ' + from));
}
log.info('Found local file to extract from ' + from);
// extract helpers
let extractCount = 0;
function countExtractions(entry) {
extractCount += 1;
log.info('install', 'unpacking ' + entry.path);
}
function afterExtract(err) {
if (err) return callback(err);
if (extractCount === 0) {
return callback(new Error('There was a fatal problem while extracting the tarball'));
}
log.info('tarball', 'done parsing tarball');
callback();
}
fs.createReadStream(from).pipe(extract(targetDir, countExtractions))
.on('close', afterExtract)
.on('error', afterExtract);
}
function do_build(gyp, argv, callback) {
const args = ['rebuild'].concat(argv);
gyp.todo.push({ name: 'build', args: args });
process.nextTick(callback);
}
function print_fallback_error(err, opts, package_json) {
const fallback_message = ' (falling back to source compile with node-gyp)';
let full_message = '';
if (err.statusCode !== undefined) {
// If we got a network response but failed to download it,
// that means remote binaries are not available, so let's try to help
// the user/developer with the info to debug why
full_message = 'Pre-built binaries not found for ' + package_json.name + '@' + package_json.version;
full_message += ' and ' + opts.runtime + '@' + (opts.target || process.versions.node) + ' (' + opts.node_abi + ' ABI, ' + opts.libc + ')';
full_message += fallback_message;
log.warn('Tried to download(' + err.statusCode + '): ' + opts.hosted_tarball);
log.warn(full_message);
log.http(err.message);
} else {
// If we do not have a statusCode that means an unexpected error
// happened and prevented an http response, so we output the exact error
full_message = 'Pre-built binaries not installable for ' + package_json.name + '@' + package_json.version;
full_message += ' and ' + opts.runtime + '@' + (opts.target || process.versions.node) + ' (' + opts.node_abi + ' ABI, ' + opts.libc + ')';
full_message += fallback_message;
log.warn(full_message);
log.warn('Hit error ' + err.message);
}
}
//
// install
//
function install(gyp, argv, callback) {
const package_json = gyp.package_json;
const napi_build_version = napi.get_napi_build_version_from_command_args(argv);
const source_build = gyp.opts['build-from-source'] || gyp.opts.build_from_source;
const update_binary = gyp.opts['update-binary'] || gyp.opts.update_binary;
const should_do_source_build = source_build === package_json.name || (source_build === true || source_build === 'true');
if (should_do_source_build) {
log.info('build', 'requesting source compile');
return do_build(gyp, argv, callback);
} else {
const fallback_to_build = gyp.opts['fallback-to-build'] || gyp.opts.fallback_to_build;
let should_do_fallback_build = fallback_to_build === package_json.name || (fallback_to_build === true || fallback_to_build === 'true');
// but allow override from npm
if (process.env.npm_config_argv) {
const cooked = JSON.parse(process.env.npm_config_argv).cooked;
const match = cooked.indexOf('--fallback-to-build');
if (match > -1 && cooked.length > match && cooked[match + 1] === 'false') {
should_do_fallback_build = false;
log.info('install', 'Build fallback disabled via npm flag: --fallback-to-build=false');
}
}
let opts;
try {
opts = versioning.evaluate(package_json, gyp.opts, napi_build_version);
} catch (err) {
return callback(err);
}
opts.ca = gyp.opts.ca;
opts.cafile = gyp.opts.cafile;
const from = opts.hosted_tarball;
const to = opts.module_path;
const binary_module = path.join(to, opts.module_name + '.node');
existsAsync(binary_module, (found) => {
if (!update_binary) {
if (found) {
console.log('[' + package_json.name + '] Success: "' + binary_module + '" already installed');
console.log('Pass --update-binary to reinstall or --build-from-source to recompile');
return callback();
}
log.info('check', 'checked for "' + binary_module + '" (not found)');
}
makeDir(to).then(() => {
const fileName = from.startsWith('file://') && from.slice('file://'.length);
if (fileName) {
extract_from_local(fileName, to, after_place);
} else {
place_binary(from, to, opts, after_place);
}
}).catch((err) => {
after_place(err);
});
function after_place(err) {
if (err && should_do_fallback_build) {
print_fallback_error(err, opts, package_json);
return do_build(gyp, argv, callback);
} else if (err) {
return callback(err);
} else {
console.log('[' + package_json.name + '] Success: "' + binary_module + '" is installed via remote');
return callback();
}
}
});
}
}

---
'use strict';
/**
* Set the title.
*/
process.title = 'node-pre-gyp';
const node_pre_gyp = require('../');
const log = require('npmlog');
/**
* Process and execute the selected commands.
*/
const prog = new node_pre_gyp.Run({ argv: process.argv });
let completed = false;
if (prog.todo.length === 0) {
if (~process.argv.indexOf('-v') || ~process.argv.indexOf('--version')) {
console.log('v%s', prog.version);
process.exit(0);
} else if (~process.argv.indexOf('-h') || ~process.argv.indexOf('--help')) {
console.log('%s', prog.usage());
process.exit(0);
}
console.log('%s', prog.usage());
process.exit(1);
}
// if --no-color is passed
if (prog.opts && Object.hasOwnProperty.call(prog, 'color') && !prog.opts.color) {
log.disableColor();
}
log.info('it worked if it ends with', 'ok');
log.verbose('cli', process.argv);
log.info('using', process.title + '@%s', prog.version);
log.info('using', 'node@%s | %s | %s', process.versions.node, process.platform, process.arch);
/**
* Change dir if -C/--directory was passed.
*/
const dir = prog.opts.directory;
if (dir) {
const fs = require('fs');
try {
const stat = fs.statSync(dir);
if (stat.isDirectory()) {
log.info('chdir', dir);
process.chdir(dir);
} else {
log.warn('chdir', dir + ' is not a directory');
}
} catch (e) {
if (e.code === 'ENOENT') {
log.warn('chdir', dir + ' is not a directory');
} else {
log.warn('chdir', 'error during chdir() "%s"', e.message);
}
}
}
function run() {
const command = prog.todo.shift();
if (!command) {
// done!
completed = true;
log.info('ok');
return;
}
// set binary.host when appropriate. host determines the s3 target bucket.
const target = prog.setBinaryHostProperty(command.name);
if (target && ['install', 'publish', 'unpublish', 'info'].indexOf(command.name) >= 0) {
log.info('using binary.host: ' + prog.package_json.binary.host);
}
prog.commands[command.name](command.args, function(err) {
if (err) {
log.error(command.name + ' error');
log.error('stack', err.stack);
errorMessage();
log.error('not ok');
console.log(err.message);
return process.exit(1);
}
const args_array = [].slice.call(arguments, 1);
if (args_array.length) {
console.log.apply(console, args_array);
}
// now run the next command in the queue
process.nextTick(run);
});
}
process.on('exit', (code) => {
if (!completed && !code) {
log.error('Completion callback never invoked!');
errorMessage();
process.exit(6);
}
});
process.on('uncaughtException', (err) => {
log.error('UNCAUGHT EXCEPTION');
log.error('stack', err.stack);
errorMessage();
process.exit(7);
});
function errorMessage() {
// copied from npm's lib/util/error-handler.js
const os = require('os');
log.error('System', os.type() + ' ' + os.release());
log.error('command', process.argv.map(JSON.stringify).join(' '));
log.error('cwd', process.cwd());
log.error('node -v', process.version);
log.error(process.title + ' -v', 'v' + prog.package.version);
}
// start running the given commands!
run();

---
'use strict';
/**
* Module exports.
*/
module.exports = exports;
/**
* Module dependencies.
*/
// load mocking control function for accessing s3 via https. the function is a noop always returning
// false if not mocking.
exports.mockS3Http = require('./util/s3_setup').get_mockS3Http();
exports.mockS3Http('on');
const mocking = exports.mockS3Http('get');
const fs = require('fs');
const path = require('path');
const nopt = require('nopt');
const log = require('npmlog');
log.disableProgress();
const napi = require('./util/napi.js');
const EE = require('events').EventEmitter;
const inherits = require('util').inherits;
const cli_commands = [
'clean',
'install',
'reinstall',
'build',
'rebuild',
'package',
'testpackage',
'publish',
'unpublish',
'info',
'testbinary',
'reveal',
'configure'
];
const aliases = {};
// differentiate node-pre-gyp's logs from npm's
log.heading = 'node-pre-gyp';
if (mocking) {
log.warn(`mocking s3 to ${process.env.node_pre_gyp_mock_s3}`);
}
// this is a getter to avoid circular reference warnings with node v14.
Object.defineProperty(exports, 'find', {
get: function() {
return require('./pre-binding').find;
},
enumerable: true
});
// in the following, "my_module" is using node-pre-gyp to
// prebuild and install pre-built binaries. "main_module"
// is using "my_module".
//
// "bin/node-pre-gyp" invokes Run() without a path. the
// expectation is that the working directory is the package
// root "my_module". this is true because in all cases npm is
// executing a script in the context of "my_module".
//
// "pre-binding.find()" is executed by "my_module" but in the
// context of "main_module". this is because "main_module" is
// executing and requires "my_module" which is then executing
// "pre-binding.find()" via "node-pre-gyp.find()", so the working
// directory is that of "main_module".
//
// that's why "find()" must pass the path to package.json.
//
function Run({ package_json_path = './package.json', argv }) {
this.package_json_path = package_json_path;
this.commands = {};
const self = this;
cli_commands.forEach((command) => {
self.commands[command] = function(argvx, callback) {
log.verbose('command', command, argvx);
return require('./' + command)(self, argvx, callback);
};
});
this.parseArgv(argv);
// this is set to true after the binary.host property was set to
// either staging_host or production_host.
this.binaryHostSet = false;
}
inherits(Run, EE);
exports.Run = Run;
const proto = Run.prototype;
/**
* Export the contents of the package.json.
*/
proto.package = require('../package.json');
/**
* nopt configuration definitions
*/
proto.configDefs = {
help: Boolean, // everywhere
arch: String, // 'configure'
debug: Boolean, // 'build'
directory: String, // bin
proxy: String, // 'install'
loglevel: String // everywhere
};
/**
* nopt shorthands
*/
proto.shorthands = {
release: '--no-debug',
C: '--directory',
debug: '--debug',
j: '--jobs',
silent: '--loglevel=silent',
silly: '--loglevel=silly',
verbose: '--loglevel=verbose'
};
/**
* expose the command aliases for the bin file to use.
*/
proto.aliases = aliases;
/**
* Parses the given argv array and sets the 'opts', 'argv',
* 'command', and 'package_json' properties.
*/
proto.parseArgv = function parseOpts(argv) {
this.opts = nopt(this.configDefs, this.shorthands, argv);
this.argv = this.opts.argv.remain.slice();
const commands = this.todo = [];
// create a copy of the argv array with aliases mapped
argv = this.argv.map((arg) => {
// is this an alias?
if (arg in this.aliases) {
arg = this.aliases[arg];
}
return arg;
});
// process the mapped args into "command" objects ("name" and "args" props)
argv.slice().forEach((arg) => {
if (arg in this.commands) {
const args = argv.splice(0, argv.indexOf(arg));
argv.shift();
if (commands.length > 0) {
commands[commands.length - 1].args = args;
}
commands.push({ name: arg, args: [] });
}
});
if (commands.length > 0) {
commands[commands.length - 1].args = argv.splice(0);
}
// if a directory was specified package.json is assumed to be relative
// to it.
let package_json_path = this.package_json_path;
if (this.opts.directory) {
package_json_path = path.join(this.opts.directory, package_json_path);
}
this.package_json = JSON.parse(fs.readFileSync(package_json_path));
// expand commands entries for multiple napi builds
this.todo = napi.expand_commands(this.package_json, this.opts, commands);
// support for inheriting config env variables from npm
const npm_config_prefix = 'npm_config_';
Object.keys(process.env).forEach((name) => {
if (name.indexOf(npm_config_prefix) !== 0) return;
const val = process.env[name];
if (name === npm_config_prefix + 'loglevel') {
log.level = val;
} else {
// add the user-defined options to the config
name = name.substring(npm_config_prefix.length);
// avoid letting npm's argv clobber already-present args,
// which avoids the problem of 'npm test' calling a
// script that runs its own npm install commands
if (name === 'argv') {
if (this.opts.argv &&
this.opts.argv.remain &&
this.opts.argv.remain.length) {
// do nothing
} else {
this.opts[name] = val;
}
} else {
this.opts[name] = val;
}
}
});
if (this.opts.loglevel) {
log.level = this.opts.loglevel;
}
log.resume();
};
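The forEach loop above turns a flat, alias-mapped argv list into ordered command objects, attaching any interleaved arguments to the preceding command. A minimal standalone sketch of that grouping logic (the `known` set stands in for `this.commands`; it is not the real CLI wiring):

```javascript
// Standalone sketch of the command-grouping loop in parseArgv above.
// `known` stands in for this.commands; argv is already alias-mapped.
function groupCommands(argv, known) {
  const commands = [];
  argv = argv.slice();
  argv.slice().forEach((arg) => {
    if (known.has(arg)) {
      // everything before the command name belongs to the previous command
      const args = argv.splice(0, argv.indexOf(arg));
      argv.shift(); // drop the command name itself
      if (commands.length > 0) {
        commands[commands.length - 1].args = args;
      }
      commands.push({ name: arg, args: [] });
    }
  });
  // whatever trails the last command name becomes its args
  if (commands.length > 0) {
    commands[commands.length - 1].args = argv.splice(0);
  }
  return commands;
}

const result = groupCommands(['clean', 'build', 'rebuild'], new Set(['clean', 'build']));
// 'rebuild' is not a known command, so it ends up as an argument of 'build'
```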
/**
* allow the binary.host property to be set at execution time.
*
* for this to take effect requires all the following to be true.
* - binary is a property in package.json
* - binary.host is falsey
* - binary.staging_host is not empty
* - binary.production_host is not empty
*
* if any of the previous checks fail then the function returns an empty string
* and makes no changes to package.json's binary property.
*
*
* if command is "publish" or "unpublish" then the default is set to "binary.staging_host"
* if command is anything else then the default is set to "binary.production_host"
*
* if the command-line option '--s3_host' is set to "staging" or "production" then
* "binary.host" is set to the specified "staging_host" or "production_host". if
* '--s3_host' is any other value an exception is thrown.
*
* if '--s3_host' is not present then "binary.host" is set to the default as above.
*
* this strategy was chosen so that any command other than "publish" or "unpublish" uses "production"
* as the default without requiring any command-line options but that "publish" and "unpublish" require
* '--s3_host production_host' to be specified in order to *really* publish (or unpublish). publishing
* to staging can be done freely without worrying about disturbing any production releases.
*/
proto.setBinaryHostProperty = function(command) {
if (this.binaryHostSet) {
return this.package_json.binary.host;
}
const p = this.package_json;
// don't set anything if host is present. it must be left blank to trigger this.
if (!p || !p.binary || p.binary.host) {
return '';
}
// and both staging and production must be present. errors will be reported later.
if (!p.binary.staging_host || !p.binary.production_host) {
return '';
}
let target = 'production_host';
if (command === 'publish' || command === 'unpublish') {
target = 'staging_host';
}
// the environment variable has priority over the default or the command line. if
// either the env var or the command line option are invalid throw an error.
const npg_s3_host = process.env.node_pre_gyp_s3_host;
if (npg_s3_host === 'staging' || npg_s3_host === 'production') {
target = `${npg_s3_host}_host`;
} else if (this.opts['s3_host'] === 'staging' || this.opts['s3_host'] === 'production') {
target = `${this.opts['s3_host']}_host`;
} else if (this.opts['s3_host'] || npg_s3_host) {
throw new Error(`invalid s3_host ${this.opts['s3_host'] || npg_s3_host}`);
}
p.binary.host = p.binary[target];
this.binaryHostSet = true;
return p.binary.host;
};
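The precedence implemented above (environment variable over the `--s3_host` option, over a command-sensitive default) can be condensed into a small pure function. This is an illustrative sketch of that precedence only, not the module's API:

```javascript
// Sketch of the host-selection precedence in setBinaryHostProperty above:
// env var > --s3_host option > command-based default.
function chooseHostTarget(command, s3HostOpt, envS3Host) {
  let target = 'production_host';
  if (command === 'publish' || command === 'unpublish') {
    target = 'staging_host';
  }
  if (envS3Host === 'staging' || envS3Host === 'production') {
    target = envS3Host + '_host';
  } else if (s3HostOpt === 'staging' || s3HostOpt === 'production') {
    target = s3HostOpt + '_host';
  } else if (s3HostOpt || envS3Host) {
    throw new Error('invalid s3_host ' + (s3HostOpt || envS3Host));
  }
  return target;
}
```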
/**
* Returns the usage instructions for node-pre-gyp.
*/
proto.usage = function usage() {
const str = [
'',
' Usage: node-pre-gyp <command> [options]',
'',
' where <command> is one of:',
cli_commands.map((c) => {
return ' - ' + c + ' - ' + require('./' + c).usage;
}).join('\n'),
'',
'node-pre-gyp@' + this.version + ' ' + path.resolve(__dirname, '..'),
'node@' + process.versions.node
].join('\n');
return str;
};
/**
* Version number getter.
*/
Object.defineProperty(proto, 'version', {
get: function() {
return this.package.version;
},
enumerable: true
});


@@ -1,73 +0,0 @@
'use strict';
module.exports = exports = _package;
exports.usage = 'Packs binary (and enclosing directory) into locally staged tarball';
const fs = require('fs');
const path = require('path');
const log = require('npmlog');
const versioning = require('./util/versioning.js');
const napi = require('./util/napi.js');
const existsAsync = fs.exists || path.exists;
const makeDir = require('make-dir');
const tar = require('tar');
function readdirSync(dir) {
let list = [];
const files = fs.readdirSync(dir);
files.forEach((file) => {
const stats = fs.lstatSync(path.join(dir, file));
if (stats.isDirectory()) {
list = list.concat(readdirSync(path.join(dir, file)));
} else {
list.push(path.join(dir, file));
}
});
return list;
}
function _package(gyp, argv, callback) {
const package_json = gyp.package_json;
const napi_build_version = napi.get_napi_build_version_from_command_args(argv);
const opts = versioning.evaluate(package_json, gyp.opts, napi_build_version);
const from = opts.module_path;
const binary_module = path.join(from, opts.module_name + '.node');
existsAsync(binary_module, (found) => {
if (!found) {
return callback(new Error('Cannot package because ' + binary_module + ' missing: run `node-pre-gyp rebuild` first'));
}
const tarball = opts.staged_tarball;
const filter_func = function(entry) {
const basename = path.basename(entry);
if (basename.length && basename[0] !== '.') {
console.log('packing ' + entry);
return true;
} else {
console.log('skipping ' + entry);
}
return false;
};
makeDir(path.dirname(tarball)).then(() => {
let files = readdirSync(from);
const base = path.basename(from);
files = files.map((file) => {
return path.join(base, path.relative(from, file));
});
tar.create({
portable: false,
gzip: true,
filter: filter_func,
file: tarball,
cwd: path.dirname(from)
}, files, (err2) => {
if (err2) console.error('[' + package_json.name + '] ' + err2.message);
else log.info('package', 'Binary staged at "' + tarball + '"');
return callback(err2);
});
}).catch((err) => {
return callback(err);
});
});
}


@@ -1,34 +0,0 @@
'use strict';
const npg = require('..');
const versioning = require('../lib/util/versioning.js');
const napi = require('../lib/util/napi.js');
const existsSync = require('fs').existsSync || require('path').existsSync;
const path = require('path');
module.exports = exports;
exports.usage = 'Finds the require path for the node-pre-gyp installed module';
exports.validate = function(package_json, opts) {
versioning.validate_config(package_json, opts);
};
exports.find = function(package_json_path, opts) {
if (!existsSync(package_json_path)) {
throw new Error(package_json_path + ' does not exist');
}
const prog = new npg.Run({ package_json_path, argv: process.argv });
prog.setBinaryHostProperty();
const package_json = prog.package_json;
versioning.validate_config(package_json, opts);
let napi_build_version;
if (napi.get_napi_build_versions(package_json, opts)) {
napi_build_version = napi.get_best_napi_build_version(package_json, opts);
}
opts = opts || {};
if (!opts.module_root) opts.module_root = path.dirname(package_json_path);
const meta = versioning.evaluate(package_json, opts, napi_build_version);
return meta.module;
};


@@ -1,81 +0,0 @@
'use strict';
module.exports = exports = publish;
exports.usage = 'Publishes pre-built binary (requires aws-sdk)';
const fs = require('fs');
const path = require('path');
const log = require('npmlog');
const versioning = require('./util/versioning.js');
const napi = require('./util/napi.js');
const s3_setup = require('./util/s3_setup.js');
const existsAsync = fs.exists || path.exists;
const url = require('url');
function publish(gyp, argv, callback) {
const package_json = gyp.package_json;
const napi_build_version = napi.get_napi_build_version_from_command_args(argv);
const opts = versioning.evaluate(package_json, gyp.opts, napi_build_version);
const tarball = opts.staged_tarball;
existsAsync(tarball, (found) => {
if (!found) {
return callback(new Error('Cannot publish because ' + tarball + ' missing: run `node-pre-gyp package` first'));
}
log.info('publish', 'Detecting s3 credentials');
const config = {};
s3_setup.detect(opts, config);
const s3 = s3_setup.get_s3(config);
const key_name = url.resolve(config.prefix, opts.package_name);
const s3_opts = {
Bucket: config.bucket,
Key: key_name
};
log.info('publish', 'Authenticating with s3');
log.info('publish', config);
log.info('publish', 'Checking for existing binary at ' + opts.hosted_path);
s3.headObject(s3_opts, (err, meta) => {
if (meta) log.info('publish', JSON.stringify(meta));
if (err && err.code === 'NotFound') {
// we are safe to publish because
// the object does not already exist
log.info('publish', 'Preparing to put object');
const s3_put_opts = {
ACL: 'public-read',
Body: fs.createReadStream(tarball),
Key: key_name,
Bucket: config.bucket
};
log.info('publish', 'Putting object', s3_put_opts.ACL, s3_put_opts.Bucket, s3_put_opts.Key);
try {
s3.putObject(s3_put_opts, (err2, resp) => {
log.info('publish', 'returned from putting object');
if (err2) {
log.info('publish', 's3 putObject error: "' + err2 + '"');
return callback(err2);
}
if (resp) log.info('publish', 's3 putObject response: "' + JSON.stringify(resp) + '"');
log.info('publish', 'successfully put object');
console.log('[' + package_json.name + '] published to ' + opts.hosted_path);
return callback();
});
} catch (err3) {
log.info('publish', 's3 putObject error: "' + err3 + '"');
return callback(err3);
}
} else if (err) {
log.info('publish', 's3 headObject error: "' + err + '"');
return callback(err);
} else {
log.error('publish', 'Cannot publish over existing version');
log.error('publish', "Update the 'version' field in package.json and try again");
log.error('publish', 'If the previous version was published in error see:');
log.error('publish', '\t node-pre-gyp unpublish');
return callback(new Error('Failed publishing to ' + opts.hosted_path));
}
});
});
}


@@ -1,20 +0,0 @@
'use strict';
module.exports = exports = rebuild;
exports.usage = 'Runs "clean" and "build" at once';
const napi = require('./util/napi.js');
function rebuild(gyp, argv, callback) {
const package_json = gyp.package_json;
let commands = [
{ name: 'clean', args: [] },
{ name: 'build', args: ['rebuild'] }
];
commands = napi.expand_commands(package_json, gyp.opts, commands);
for (let i = commands.length; i !== 0; i--) {
gyp.todo.unshift(commands[i - 1]);
}
process.nextTick(callback);
}


@@ -1,19 +0,0 @@
'use strict';
module.exports = exports = rebuild;
exports.usage = 'Runs "clean" and "install" at once';
const napi = require('./util/napi.js');
function rebuild(gyp, argv, callback) {
const package_json = gyp.package_json;
let installArgs = [];
const napi_build_version = napi.get_best_napi_build_version(package_json, gyp.opts);
if (napi_build_version != null) installArgs = [napi.get_command_arg(napi_build_version)];
gyp.todo.unshift(
{ name: 'clean', args: [] },
{ name: 'install', args: installArgs }
);
process.nextTick(callback);
}


@@ -1,32 +0,0 @@
'use strict';
module.exports = exports = reveal;
exports.usage = 'Reveals data on the versioned binary';
const versioning = require('./util/versioning.js');
const napi = require('./util/napi.js');
function unix_paths(key, val) {
return val && val.replace ? val.replace(/\\/g, '/') : val;
}
function reveal(gyp, argv, callback) {
const package_json = gyp.package_json;
const napi_build_version = napi.get_napi_build_version_from_command_args(argv);
const opts = versioning.evaluate(package_json, gyp.opts, napi_build_version);
let hit = false;
// if a second arg is passed look to see
// if it is a known option
// console.log(JSON.stringify(gyp.opts,null,1))
const remain = gyp.opts.argv.remain[gyp.opts.argv.remain.length - 1];
if (remain && Object.hasOwnProperty.call(opts, remain)) {
console.log(opts[remain].replace(/\\/g, '/'));
hit = true;
}
// otherwise return all options as json
if (!hit) {
console.log(JSON.stringify(opts, unix_paths, 2));
}
return callback();
}
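The `unix_paths` replacer above normalizes Windows backslashes while serializing, leaving non-string values untouched. Standalone:

```javascript
// Same replacer as unix_paths above: backslashes become forward slashes
// for any string value during JSON serialization; other values pass through.
function unix_paths(key, val) {
  return val && val.replace ? val.replace(/\\/g, '/') : val;
}

// '\\' in the source is a single literal backslash in the string
const json = JSON.stringify({ module_path: 'build\\Release', abi: 72 }, unix_paths, 2);
```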


@@ -1,79 +0,0 @@
'use strict';
module.exports = exports = testbinary;
exports.usage = 'Tests that the binary.node can be required';
const path = require('path');
const log = require('npmlog');
const cp = require('child_process');
const versioning = require('./util/versioning.js');
const napi = require('./util/napi.js');
function testbinary(gyp, argv, callback) {
const args = [];
const options = {};
let shell_cmd = process.execPath;
const package_json = gyp.package_json;
const napi_build_version = napi.get_napi_build_version_from_command_args(argv);
const opts = versioning.evaluate(package_json, gyp.opts, napi_build_version);
// skip validation for runtimes we don't explicitly support (like electron)
if (opts.runtime &&
opts.runtime !== 'node-webkit' &&
opts.runtime !== 'node') {
return callback();
}
const nw = (opts.runtime && opts.runtime === 'node-webkit');
// ensure on windows that / are used for require path
const binary_module = opts.module.replace(/\\/g, '/');
if ((process.arch !== opts.target_arch) ||
(process.platform !== opts.target_platform)) {
let msg = 'skipping validation since host platform/arch (';
msg += process.platform + '/' + process.arch + ')';
msg += ' does not match target (';
msg += opts.target_platform + '/' + opts.target_arch + ')';
log.info('validate', msg);
return callback();
}
if (nw) {
options.timeout = 5000;
if (process.platform === 'darwin') {
shell_cmd = 'node-webkit';
} else if (process.platform === 'win32') {
shell_cmd = 'nw.exe';
} else {
shell_cmd = 'nw';
}
const modulePath = path.resolve(binary_module);
const appDir = path.join(__dirname, 'util', 'nw-pre-gyp');
args.push(appDir);
args.push(modulePath);
log.info('validate', "Running test command: '" + shell_cmd + ' ' + args.join(' ') + "'");
cp.execFile(shell_cmd, args, options, (err, stdout, stderr) => {
// check for normal timeout for node-webkit
if (err) {
if (err.killed === true && err.signal && err.signal.indexOf('SIG') > -1) {
return callback();
}
const stderrLog = stderr.toString();
log.info('stderr', stderrLog);
if (/^\s*Xlib:\s*extension\s*"RANDR"\s*missing\s*on\s*display\s*":\d+\.\d+"\.\s*$/.test(stderrLog)) {
log.info('RANDR', 'stderr contains only RANDR error, ignored');
return callback();
}
return callback(err);
}
return callback();
});
return;
}
args.push('--eval');
args.push("require('" + binary_module.replace(/'/g, "\\'") + "')");
log.info('validate', "Running test command: '" + shell_cmd + ' ' + args.join(' ') + "'");
cp.execFile(shell_cmd, args, options, (err, stdout, stderr) => {
if (err) {
return callback(err, { stdout: stdout, stderr: stderr });
}
return callback();
});
}


@@ -1,53 +0,0 @@
'use strict';
module.exports = exports = testpackage;
exports.usage = 'Tests that the staged package is valid';
const fs = require('fs');
const path = require('path');
const log = require('npmlog');
const existsAsync = fs.exists || path.exists;
const versioning = require('./util/versioning.js');
const napi = require('./util/napi.js');
const testbinary = require('./testbinary.js');
const tar = require('tar');
const makeDir = require('make-dir');
function testpackage(gyp, argv, callback) {
const package_json = gyp.package_json;
const napi_build_version = napi.get_napi_build_version_from_command_args(argv);
const opts = versioning.evaluate(package_json, gyp.opts, napi_build_version);
const tarball = opts.staged_tarball;
existsAsync(tarball, (found) => {
if (!found) {
return callback(new Error('Cannot test package because ' + tarball + ' missing: run `node-pre-gyp package` first'));
}
const to = opts.module_path;
function filter_func(entry) {
log.info('install', 'unpacking [' + entry.path + ']');
}
makeDir(to).then(() => {
tar.extract({
file: tarball,
cwd: to,
strip: 1,
onentry: filter_func
}).then(after_extract, callback);
}).catch((err) => {
return callback(err);
});
function after_extract() {
testbinary(gyp, argv, (err) => {
if (err) {
return callback(err);
} else {
console.log('[' + package_json.name + '] Package appears valid');
return callback();
}
});
}
});
}


@@ -1,41 +0,0 @@
'use strict';
module.exports = exports = unpublish;
exports.usage = 'Unpublishes pre-built binary (requires aws-sdk)';
const log = require('npmlog');
const versioning = require('./util/versioning.js');
const napi = require('./util/napi.js');
const s3_setup = require('./util/s3_setup.js');
const url = require('url');
function unpublish(gyp, argv, callback) {
const package_json = gyp.package_json;
const napi_build_version = napi.get_napi_build_version_from_command_args(argv);
const opts = versioning.evaluate(package_json, gyp.opts, napi_build_version);
const config = {};
s3_setup.detect(opts, config);
const s3 = s3_setup.get_s3(config);
const key_name = url.resolve(config.prefix, opts.package_name);
const s3_opts = {
Bucket: config.bucket,
Key: key_name
};
s3.headObject(s3_opts, (err, meta) => {
if (err && err.code === 'NotFound') {
console.log('[' + package_json.name + '] Not found: https://' + s3_opts.Bucket + '.s3.amazonaws.com/' + s3_opts.Key);
return callback();
} else if (err) {
return callback(err);
} else {
log.info('unpublish', JSON.stringify(meta));
s3.deleteObject(s3_opts, (err2, resp) => {
if (err2) return callback(err2);
log.info(JSON.stringify(resp));
console.log('[' + package_json.name + '] Success: removed https://' + s3_opts.Bucket + '.s3.amazonaws.com/' + s3_opts.Key);
return callback();
});
}
});
}

File diff suppressed because it is too large


@@ -1,93 +0,0 @@
'use strict';
module.exports = exports;
const fs = require('fs');
const path = require('path');
const win = process.platform === 'win32';
const existsSync = fs.existsSync || path.existsSync;
const cp = require('child_process');
// try to build up the complete path to node-gyp
/* priority:
- node-gyp on ENV:npm_config_node_gyp (https://github.com/npm/npm/pull/4887)
- node-gyp on NODE_PATH
- node-gyp inside npm on NODE_PATH (ignore on iojs)
- node-gyp inside npm beside node exe
*/
function which_node_gyp() {
let node_gyp_bin;
if (process.env.npm_config_node_gyp) {
try {
node_gyp_bin = process.env.npm_config_node_gyp;
if (existsSync(node_gyp_bin)) {
return node_gyp_bin;
}
} catch (err) {
// do nothing
}
}
try {
const node_gyp_main = require.resolve('node-gyp'); // eslint-disable-line node/no-missing-require
node_gyp_bin = path.join(path.dirname(
path.dirname(node_gyp_main)),
'bin/node-gyp.js');
if (existsSync(node_gyp_bin)) {
return node_gyp_bin;
}
} catch (err) {
// do nothing
}
if (process.execPath.indexOf('iojs') === -1) {
try {
const npm_main = require.resolve('npm'); // eslint-disable-line node/no-missing-require
node_gyp_bin = path.join(path.dirname(
path.dirname(npm_main)),
'node_modules/node-gyp/bin/node-gyp.js');
if (existsSync(node_gyp_bin)) {
return node_gyp_bin;
}
} catch (err) {
// do nothing
}
}
const npm_base = path.join(path.dirname(
path.dirname(process.execPath)),
'lib/node_modules/npm/');
node_gyp_bin = path.join(npm_base, 'node_modules/node-gyp/bin/node-gyp.js');
if (existsSync(node_gyp_bin)) {
return node_gyp_bin;
}
}
module.exports.run_gyp = function(args, opts, callback) {
let shell_cmd = '';
const cmd_args = [];
if (opts.runtime && opts.runtime === 'node-webkit') {
shell_cmd = 'nw-gyp';
if (win) shell_cmd += '.cmd';
} else {
const node_gyp_path = which_node_gyp();
if (node_gyp_path) {
shell_cmd = process.execPath;
cmd_args.push(node_gyp_path);
} else {
shell_cmd = 'node-gyp';
if (win) shell_cmd += '.cmd';
}
}
const final_args = cmd_args.concat(args);
const cmd = cp.spawn(shell_cmd, final_args, { cwd: undefined, env: process.env, stdio: [0, 1, 2] });
cmd.on('error', (err) => {
if (err) {
return callback(new Error("Failed to execute '" + shell_cmd + ' ' + final_args.join(' ') + "' (" + err + ')'));
}
callback(null, opts);
});
cmd.on('close', (code) => {
if (code && code !== 0) {
return callback(new Error("Failed to execute '" + shell_cmd + ' ' + final_args.join(' ') + "' (" + code + ')'));
}
callback(null, opts);
});
};


@@ -1,102 +0,0 @@
'use strict';
module.exports = exports = handle_gyp_opts;
const versioning = require('./versioning.js');
const napi = require('./napi.js');
/*
Here we gather node-pre-gyp generated options (from versioning) and pass them along to node-gyp.
We massage the args and options slightly to account for differences in what commands mean between
node-pre-gyp and node-gyp (e.g. see the difference between "build" and "rebuild" below)
Keep in mind: the values inside `argv` and `gyp.opts` below are different depending on whether
node-pre-gyp is called directly, or whether it is called in a `run-script` phase of npm.
We also try to preserve any command line options that might have been passed to npm or node-pre-gyp.
But this is fairly difficult without passing way too much through. For example `gyp.opts` contains all
of process.env, and npm pushes a lot of variables into process.env which node-pre-gyp inherits. So we have
to be very selective about what we pass through.
For example:
`npm install --build-from-source` will give:
argv == [ 'rebuild' ]
gyp.opts.argv == { remain: [ 'install' ],
cooked: [ 'install', '--fallback-to-build' ],
original: [ 'install', '--fallback-to-build' ] }
`./bin/node-pre-gyp build` will give:
argv == []
gyp.opts.argv == { remain: [ 'build' ],
cooked: [ 'build' ],
original: [ '-C', 'test/app1', 'build' ] }
*/
// select set of node-pre-gyp versioning info
// to share with node-gyp
const share_with_node_gyp = [
'module',
'module_name',
'module_path',
'napi_version',
'node_abi_napi',
'napi_build_version',
'node_napi_label'
];
function handle_gyp_opts(gyp, argv, callback) {
// Collect node-pre-gyp specific variables to pass to node-gyp
const node_pre_gyp_options = [];
// generate custom node-pre-gyp versioning info
const napi_build_version = napi.get_napi_build_version_from_command_args(argv);
const opts = versioning.evaluate(gyp.package_json, gyp.opts, napi_build_version);
share_with_node_gyp.forEach((key) => {
const val = opts[key];
if (val) {
node_pre_gyp_options.push('--' + key + '=' + val);
} else if (key === 'napi_build_version') {
node_pre_gyp_options.push('--' + key + '=0');
} else {
if (key !== 'napi_version' && key !== 'node_abi_napi')
return callback(new Error('Option ' + key + ' required but not found by node-pre-gyp'));
}
});
// Collect options that follow the special -- which disables nopt parsing
const unparsed_options = [];
let double_hyphen_found = false;
gyp.opts.argv.original.forEach((opt) => {
if (double_hyphen_found) {
unparsed_options.push(opt);
}
if (opt === '--') {
double_hyphen_found = true;
}
});
// We try to respect and pass through remaining command
// line options (like --foo=bar) to node-gyp
const cooked = gyp.opts.argv.cooked;
const node_gyp_options = [];
cooked.forEach((value) => {
if (value.length > 2 && value.slice(0, 2) === '--') {
const key = value.slice(2);
const val = cooked[cooked.indexOf(value) + 1];
if (val && val.indexOf('--') === -1) { // handle '--foo=bar' or ['--foo','bar']
node_gyp_options.push('--' + key + '=' + val);
} else { // pass through --foo
node_gyp_options.push(value);
}
}
});
const result = { 'opts': opts, 'gyp': node_gyp_options, 'pre': node_pre_gyp_options, 'unparsed': unparsed_options };
return callback(null, result);
}
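The `cooked` scan above rebuilds `--foo=bar` pairs from nopt's cooked argv, passing lone flags through unchanged. A standalone sketch of the same pass:

```javascript
// Sketch of the cooked-argv pass in handle_gyp_opts above: rebuild
// '--key=value' pairs from ['--key', 'value'], pass bare '--flag' through.
function collectNodeGypOptions(cooked) {
  const out = [];
  cooked.forEach((value) => {
    if (value.length > 2 && value.slice(0, 2) === '--') {
      const key = value.slice(2);
      const val = cooked[cooked.indexOf(value) + 1];
      if (val && val.indexOf('--') === -1) { // ['--foo', 'bar'] form
        out.push('--' + key + '=' + val);
      } else { // bare '--foo'
        out.push(value);
      }
    }
  });
  return out;
}

const opts = collectNodeGypOptions(['install', '--target_arch', 'x64', '--debug']);
// → ['--target_arch=x64', '--debug']
```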


@@ -1,205 +0,0 @@
'use strict';
const fs = require('fs');
module.exports = exports;
const versionArray = process.version
.substr(1)
.replace(/-.*$/, '')
.split('.')
.map((item) => {
return +item;
});
const napi_multiple_commands = [
'build',
'clean',
'configure',
'package',
'publish',
'reveal',
'testbinary',
'testpackage',
'unpublish'
];
const napi_build_version_tag = 'napi_build_version=';
module.exports.get_napi_version = function() {
// returns the non-zero numeric napi version or undefined if napi is not supported.
// correctly supporting target requires an updated cross-walk
let version = process.versions.napi; // can be undefined
if (!version) { // this code should never need to be updated
if (versionArray[0] === 9 && versionArray[1] >= 3) version = 2; // 9.3.0+
else if (versionArray[0] === 8) version = 1; // 8.0.0+
}
return version;
};
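The fallback above only matters on Node versions that predate `process.versions.napi` (Node 8.x shipped Node-API 1, and 9.3.0+ shipped Node-API 2). Expressed as a pure function over a parsed version triple, a hedged sketch:

```javascript
// Sketch of get_napi_version's fallback table for old Node releases:
// Node >= 9.3 shipped Node-API 2, Node 8.x shipped Node-API 1.
function napiFallback(versionArray, reported) {
  let version = reported; // process.versions.napi when available
  if (!version) {
    if (versionArray[0] === 9 && versionArray[1] >= 3) version = 2;
    else if (versionArray[0] === 8) version = 1;
  }
  return version; // undefined when Node-API is unsupported
}
```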
module.exports.get_napi_version_as_string = function(target) {
// returns the napi version as a string or an empty string if napi is not supported.
const version = module.exports.get_napi_version(target);
return version ? '' + version : '';
};
module.exports.validate_package_json = function(package_json, opts) { // throws Error
const binary = package_json.binary;
const module_path_ok = pathOK(binary.module_path);
const remote_path_ok = pathOK(binary.remote_path);
const package_name_ok = pathOK(binary.package_name);
const napi_build_versions = module.exports.get_napi_build_versions(package_json, opts, true);
const napi_build_versions_raw = module.exports.get_napi_build_versions_raw(package_json);
if (napi_build_versions) {
napi_build_versions.forEach((napi_build_version)=> {
if (!(parseInt(napi_build_version, 10) === napi_build_version && napi_build_version > 0)) {
throw new Error('All values specified in napi_versions must be positive integers.');
}
});
}
if (napi_build_versions && (!module_path_ok || (!remote_path_ok && !package_name_ok))) {
throw new Error('When napi_versions is specified; module_path and either remote_path or ' +
"package_name must contain the substitution string '{napi_build_version}'.");
}
if ((module_path_ok || remote_path_ok || package_name_ok) && !napi_build_versions_raw) {
throw new Error("When the substitution string '{napi_build_version}' is specified in " +
'module_path, remote_path, or package_name; napi_versions must also be specified.');
}
if (napi_build_versions && !module.exports.get_best_napi_build_version(package_json, opts) &&
module.exports.build_napi_only(package_json)) {
throw new Error(
'The Node-API version of this Node instance is ' + module.exports.get_napi_version(opts ? opts.target : undefined) + '. ' +
'This module supports Node-API version(s) ' + module.exports.get_napi_build_versions_raw(package_json) + '. ' +
'This Node instance cannot run this module.');
}
if (napi_build_versions_raw && !napi_build_versions && module.exports.build_napi_only(package_json)) {
throw new Error(
'The Node-API version of this Node instance is ' + module.exports.get_napi_version(opts ? opts.target : undefined) + '. ' +
'This module supports Node-API version(s) ' + module.exports.get_napi_build_versions_raw(package_json) + '. ' +
'This Node instance cannot run this module.');
}
};
function pathOK(path) {
return path && (path.indexOf('{napi_build_version}') !== -1 || path.indexOf('{node_napi_label}') !== -1);
}
module.exports.expand_commands = function(package_json, opts, commands) {
const expanded_commands = [];
const napi_build_versions = module.exports.get_napi_build_versions(package_json, opts);
commands.forEach((command)=> {
if (napi_build_versions && command.name === 'install') {
const napi_build_version = module.exports.get_best_napi_build_version(package_json, opts);
const args = napi_build_version ? [napi_build_version_tag + napi_build_version] : [];
expanded_commands.push({ name: command.name, args: args });
} else if (napi_build_versions && napi_multiple_commands.indexOf(command.name) !== -1) {
napi_build_versions.forEach((napi_build_version)=> {
const args = command.args.slice();
args.push(napi_build_version_tag + napi_build_version);
expanded_commands.push({ name: command.name, args: args });
});
} else {
expanded_commands.push(command);
}
});
return expanded_commands;
};
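For multi-Node-API builds, `expand_commands` above fans a single command out into one entry per buildable Node-API version, tagging each with `napi_build_version=N`. A simplified illustration of that fan-out (the real logic also special-cases `install` to pick only the best version):

```javascript
// Simplified fan-out of expand_commands above for the multi-version case.
const napi_build_version_tag = 'napi_build_version=';
function fanOut(command, napi_build_versions) {
  return napi_build_versions.map((v) => ({
    name: command.name,
    args: command.args.concat([napi_build_version_tag + v])
  }));
}

const expanded = fanOut({ name: 'build', args: [] }, [3, 6]);
// → one 'build' entry per Node-API version, each tagged with its version
```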
module.exports.get_napi_build_versions = function(package_json, opts, warnings) { // opts may be undefined
const log = require('npmlog');
let napi_build_versions = [];
const supported_napi_version = module.exports.get_napi_version(opts ? opts.target : undefined);
// remove duplicates, verify each napi version can actually be built
if (package_json.binary && package_json.binary.napi_versions) {
package_json.binary.napi_versions.forEach((napi_version) => {
const duplicated = napi_build_versions.indexOf(napi_version) !== -1;
if (!duplicated && supported_napi_version && napi_version <= supported_napi_version) {
napi_build_versions.push(napi_version);
} else if (warnings && !duplicated && supported_napi_version) {
log.info('This Node instance does not support builds for Node-API version', napi_version);
}
});
}
if (opts && opts['build-latest-napi-version-only']) {
let latest_version = 0;
napi_build_versions.forEach((napi_version) => {
if (napi_version > latest_version) latest_version = napi_version;
});
napi_build_versions = latest_version ? [latest_version] : [];
}
return napi_build_versions.length ? napi_build_versions : undefined;
};
module.exports.get_napi_build_versions_raw = function(package_json) {
const napi_build_versions = [];
// remove duplicates
if (package_json.binary && package_json.binary.napi_versions) {
package_json.binary.napi_versions.forEach((napi_version) => {
if (napi_build_versions.indexOf(napi_version) === -1) {
napi_build_versions.push(napi_version);
}
});
}
return napi_build_versions.length ? napi_build_versions : undefined;
};
module.exports.get_command_arg = function(napi_build_version) {
return napi_build_version_tag + napi_build_version;
};
module.exports.get_napi_build_version_from_command_args = function(command_args) {
for (let i = 0; i < command_args.length; i++) {
const arg = command_args[i];
if (arg.indexOf(napi_build_version_tag) === 0) {
return parseInt(arg.substr(napi_build_version_tag.length), 10);
}
}
return undefined;
};
module.exports.swap_build_dir_out = function(napi_build_version) {
if (napi_build_version) {
const rm = require('rimraf');
rm.sync(module.exports.get_build_dir(napi_build_version));
fs.renameSync('build', module.exports.get_build_dir(napi_build_version));
}
};
module.exports.swap_build_dir_in = function(napi_build_version) {
if (napi_build_version) {
const rm = require('rimraf');
rm.sync('build');
fs.renameSync(module.exports.get_build_dir(napi_build_version), 'build');
}
};
module.exports.get_build_dir = function(napi_build_version) {
return 'build-tmp-napi-v' + napi_build_version;
};
module.exports.get_best_napi_build_version = function(package_json, opts) {
let best_napi_build_version = 0;
const napi_build_versions = module.exports.get_napi_build_versions(package_json, opts);
if (napi_build_versions) {
const our_napi_version = module.exports.get_napi_version(opts ? opts.target : undefined);
napi_build_versions.forEach((napi_build_version)=> {
if (napi_build_version > best_napi_build_version &&
napi_build_version <= our_napi_version) {
best_napi_build_version = napi_build_version;
}
});
}
return best_napi_build_version === 0 ? undefined : best_napi_build_version;
};
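`get_best_napi_build_version` above picks the highest configured version that the running Node instance can still load. Reduced to a pure function, the selection looks like this:

```javascript
// Sketch of the selection above: highest configured build version that is
// still <= the runtime's own Node-API version; undefined if none qualifies.
function bestNapiVersion(buildVersions, runtimeVersion) {
  let best = 0;
  buildVersions.forEach((v) => {
    if (v > best && v <= runtimeVersion) best = v;
  });
  return best === 0 ? undefined : best;
}
```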
module.exports.build_napi_only = function(package_json) {
return package_json.binary && package_json.binary.package_name &&
package_json.binary.package_name.indexOf('{node_napi_label}') === -1;
};


@@ -1,26 +0,0 @@
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Node-webkit-based module test</title>
<script>
function nwModuleTest(){
var util = require('util');
var moduleFolder = require('nw.gui').App.argv[0];
try {
require(moduleFolder);
} catch(e) {
if( process.platform !== 'win32' ){
util.log('nw-pre-gyp error:');
util.log(e.stack);
}
process.exit(1);
}
process.exit(0);
}
</script>
</head>
<body onload="nwModuleTest()">
<h1>Node-webkit-based module test</h1>
</body>
</html>


@@ -1,9 +0,0 @@
{
"main": "index.html",
"name": "nw-pre-gyp-module-test",
"description": "Node-webkit-based module test.",
"version": "0.0.1",
"window": {
"show": false
}
}


@@ -1,163 +0,0 @@
'use strict';
module.exports = exports;
const url = require('url');
const fs = require('fs');
const path = require('path');
module.exports.detect = function(opts, config) {
const to = opts.hosted_path;
const uri = url.parse(to);
config.prefix = (!uri.pathname || uri.pathname === '/') ? '' : uri.pathname.replace('/', '');
if (opts.bucket && opts.region) {
config.bucket = opts.bucket;
config.region = opts.region;
config.endpoint = opts.host;
config.s3ForcePathStyle = opts.s3ForcePathStyle;
} else {
const parts = uri.hostname.split('.s3');
const bucket = parts[0];
if (!bucket) {
return;
}
if (!config.bucket) {
config.bucket = bucket;
}
if (!config.region) {
const region = parts[1].slice(1).split('.')[0];
if (region === 'amazonaws') {
config.region = 'us-east-1';
} else {
config.region = region;
}
}
}
};
module.exports.get_s3 = function(config) {
if (process.env.node_pre_gyp_mock_s3) {
// here we're mocking. node_pre_gyp_mock_s3 is the scratch directory
// for the mock code.
const AWSMock = require('mock-aws-s3');
const os = require('os');
AWSMock.config.basePath = `${os.tmpdir()}/mock`;
const s3 = AWSMock.S3();
// wrapped-callback maker: fs calls report an error code of ENOENT, but
// AWS.S3 reports NotFound.
const wcb = (fn) => (err, ...args) => {
if (err && err.code === 'ENOENT') {
err.code = 'NotFound';
}
return fn(err, ...args);
};
return {
listObjects(params, callback) {
return s3.listObjects(params, wcb(callback));
},
headObject(params, callback) {
return s3.headObject(params, wcb(callback));
},
deleteObject(params, callback) {
return s3.deleteObject(params, wcb(callback));
},
putObject(params, callback) {
return s3.putObject(params, wcb(callback));
}
};
}
// if not mocking then setup real s3.
const AWS = require('aws-sdk');
AWS.config.update(config);
const s3 = new AWS.S3();
// need to change if additional options need to be specified.
return {
listObjects(params, callback) {
return s3.listObjects(params, callback);
},
headObject(params, callback) {
return s3.headObject(params, callback);
},
deleteObject(params, callback) {
return s3.deleteObject(params, callback);
},
putObject(params, callback) {
return s3.putObject(params, callback);
}
};
};
//
// function to get the mocking control function. if not mocking it returns a no-op.
//
// if mocking it sets up the mock http interceptors that use the mocked s3 file system
// to fulfill responses.
module.exports.get_mockS3Http = function() {
let mock_s3 = false;
if (!process.env.node_pre_gyp_mock_s3) {
return () => mock_s3;
}
const nock = require('nock');
// the bucket used for testing, as addressed by https.
const host = 'https://mapbox-node-pre-gyp-public-testing-bucket.s3.us-east-1.amazonaws.com';
const mockDir = process.env.node_pre_gyp_mock_s3 + '/mapbox-node-pre-gyp-public-testing-bucket';
// function to setup interceptors. they are "turned off" by setting mock_s3 to false.
const mock_http = () => {
// eslint-disable-next-line no-unused-vars
function get(uri, requestBody) {
const filepath = path.join(mockDir, uri.replace('%2B', '+'));
try {
fs.accessSync(filepath, fs.constants.R_OK);
} catch (e) {
return [404, 'not found\n'];
}
// the mock s3 functions just write to disk, so just read from it.
return [200, fs.createReadStream(filepath)];
}
// eslint-disable-next-line no-unused-vars
return nock(host)
.persist()
.get(() => mock_s3) // mock any uri for s3 when true
.reply(get);
};
// setup interceptors. they check the mock_s3 flag to determine whether to intercept.
mock_http(nock, host, mockDir);
// function to turn matching all requests to s3 on/off.
const mockS3Http = (action) => {
const previous = mock_s3;
if (action === 'off') {
mock_s3 = false;
} else if (action === 'on') {
mock_s3 = true;
} else if (action !== 'get') {
throw new Error(`illegal action for setMockHttp ${action}`);
}
return previous;
};
// call mockS3Http with the argument
// - 'on' - turn it on
// - 'off' - turn it off (used by fetch.test.js so it doesn't interfere with redirects)
// - 'get' - return true or false for 'on' or 'off'
return mockS3Http;
};


@@ -1,335 +0,0 @@
'use strict';
module.exports = exports;
const path = require('path');
const semver = require('semver');
const url = require('url');
const detect_libc = require('detect-libc');
const napi = require('./napi.js');
let abi_crosswalk;
// This is used for unit testing to provide a fake
// ABI crosswalk that emulates one that is not updated
// for the current version
if (process.env.NODE_PRE_GYP_ABI_CROSSWALK) {
abi_crosswalk = require(process.env.NODE_PRE_GYP_ABI_CROSSWALK);
} else {
abi_crosswalk = require('./abi_crosswalk.json');
}
const major_versions = {};
Object.keys(abi_crosswalk).forEach((v) => {
const major = v.split('.')[0];
if (!major_versions[major]) {
major_versions[major] = v;
}
});
function get_electron_abi(runtime, target_version) {
if (!runtime) {
throw new Error('get_electron_abi requires valid runtime arg');
}
if (typeof target_version === 'undefined') {
// erroneous CLI call
throw new Error('Empty target version is not supported if electron is the target.');
}
// Electron guarantees that patch version update won't break native modules.
const sem_ver = semver.parse(target_version);
return runtime + '-v' + sem_ver.major + '.' + sem_ver.minor;
}
module.exports.get_electron_abi = get_electron_abi;
function get_node_webkit_abi(runtime, target_version) {
if (!runtime) {
throw new Error('get_node_webkit_abi requires valid runtime arg');
}
if (typeof target_version === 'undefined') {
// erroneous CLI call
throw new Error('Empty target version is not supported if node-webkit is the target.');
}
return runtime + '-v' + target_version;
}
module.exports.get_node_webkit_abi = get_node_webkit_abi;
function get_node_abi(runtime, versions) {
if (!runtime) {
throw new Error('get_node_abi requires valid runtime arg');
}
if (!versions) {
throw new Error('get_node_abi requires valid process.versions object');
}
const sem_ver = semver.parse(versions.node);
if (sem_ver.major === 0 && sem_ver.minor % 2) { // odd series
// https://github.com/mapbox/node-pre-gyp/issues/124
return runtime + '-v' + versions.node;
} else {
// process.versions.modules added in >= v0.10.4 and v0.11.7
// https://github.com/joyent/node/commit/ccabd4a6fa8a6eb79d29bc3bbe9fe2b6531c2d8e
return versions.modules ? runtime + '-v' + (+versions.modules) :
'v8-' + versions.v8.split('.').slice(0, 2).join('.');
}
}
module.exports.get_node_abi = get_node_abi;
function get_runtime_abi(runtime, target_version) {
if (!runtime) {
throw new Error('get_runtime_abi requires valid runtime arg');
}
if (runtime === 'node-webkit') {
return get_node_webkit_abi(runtime, target_version || process.versions['node-webkit']);
} else if (runtime === 'electron') {
return get_electron_abi(runtime, target_version || process.versions.electron);
} else {
if (runtime !== 'node') {
throw new Error("Unknown Runtime: '" + runtime + "'");
}
if (!target_version) {
return get_node_abi(runtime, process.versions);
} else {
let cross_obj;
// abi_crosswalk generated with ./scripts/abi_crosswalk.js
if (abi_crosswalk[target_version]) {
cross_obj = abi_crosswalk[target_version];
} else {
const target_parts = target_version.split('.').map((i) => { return +i; });
if (target_parts.length !== 3) { // parse failed
throw new Error('Unknown target version: ' + target_version);
}
/*
The below code tries to infer the last known ABI compatible version
that we have recorded in the abi_crosswalk.json when an exact match
is not possible. The reasons for this to exist are complicated:
- We support passing --target to be able to allow developers to package binaries for versions of node
that are not the same one as they are running. This might also be used in combination with the
--target_arch or --target_platform flags to also package binaries for alternative platforms
- When --target is passed we can't therefore determine the ABI (process.versions.modules) from the node
version that is running in memory
- So, therefore node-pre-gyp keeps an "ABI crosswalk" (lib/util/abi_crosswalk.json) to be able to look
this info up for all versions
- But we cannot easily predict what the future ABI will be for released versions
- And node-pre-gyp needs to be a `bundledDependency` in apps that depend on it in order to work correctly
by being fully available at install time.
- So, the speed of node releases and the bundled nature of node-pre-gyp mean that a new node-pre-gyp release
needs to happen for every node.js/io.js/node-webkit/nw.js/atom-shell/etc release that might come online if
you want the `--target` flag to keep working for the latest version
- Which is impractical ^^
- Hence the below code guesses about future ABI to make the need to update node-pre-gyp less demanding.
In practice then you can have a dependency of your app like `node-sqlite3` that bundles a `node-pre-gyp` that
only knows about node v0.10.33 in the `abi_crosswalk.json` but target node v0.10.34 (which is assumed to be
ABI compatible with v0.10.33).
TODO: use semver module instead of custom version parsing
*/
const major = target_parts[0];
let minor = target_parts[1];
let patch = target_parts[2];
// io.js: yeah if node.js ever releases 1.x this will break
// but that is unlikely to happen: https://github.com/iojs/io.js/pull/253#issuecomment-69432616
if (major === 1) {
// look for last release that is the same major version
// e.g. we assume io.js 1.x is ABI compatible with >= 1.0.0
while (true) {
if (minor > 0) --minor;
if (patch > 0) --patch;
const new_iojs_target = '' + major + '.' + minor + '.' + patch;
if (abi_crosswalk[new_iojs_target]) {
cross_obj = abi_crosswalk[new_iojs_target];
console.log('Warning: node-pre-gyp could not find exact match for ' + target_version);
console.log('Warning: but node-pre-gyp successfully chose ' + new_iojs_target + ' as ABI compatible target');
break;
}
if (minor === 0 && patch === 0) {
break;
}
}
} else if (major >= 2) {
// look for last release that is the same major version
if (major_versions[major]) {
cross_obj = abi_crosswalk[major_versions[major]];
console.log('Warning: node-pre-gyp could not find exact match for ' + target_version);
console.log('Warning: but node-pre-gyp successfully chose ' + major_versions[major] + ' as ABI compatible target');
}
} else if (major === 0) { // node.js
if (target_parts[1] % 2 === 0) { // for stable/even node.js series
// look for the last release that is the same minor release
// e.g. we assume node 0.10.x is ABI compatible with >= 0.10.0
while (--patch > 0) {
const new_node_target = '' + major + '.' + minor + '.' + patch;
if (abi_crosswalk[new_node_target]) {
cross_obj = abi_crosswalk[new_node_target];
console.log('Warning: node-pre-gyp could not find exact match for ' + target_version);
console.log('Warning: but node-pre-gyp successfully chose ' + new_node_target + ' as ABI compatible target');
break;
}
}
}
}
}
if (!cross_obj) {
throw new Error('Unsupported target version: ' + target_version);
}
// emulate process.versions
const versions_obj = {
node: target_version,
v8: cross_obj.v8 + '.0',
// abi_crosswalk uses 1 for node versions lacking process.versions.modules
// process.versions.modules added in >= v0.10.4 and v0.11.7
modules: cross_obj.node_abi > 1 ? cross_obj.node_abi : undefined
};
return get_node_abi(runtime, versions_obj);
}
}
}
module.exports.get_runtime_abi = get_runtime_abi;
const required_parameters = [
'module_name',
'module_path',
'host'
];
function validate_config(package_json, opts) {
const msg = package_json.name + ' package.json is not node-pre-gyp ready:\n';
const missing = [];
if (!package_json.main) {
missing.push('main');
}
if (!package_json.version) {
missing.push('version');
}
if (!package_json.name) {
missing.push('name');
}
if (!package_json.binary) {
missing.push('binary');
}
const o = package_json.binary;
if (o) {
required_parameters.forEach((p) => {
if (!o[p] || typeof o[p] !== 'string') {
missing.push('binary.' + p);
}
});
}
if (missing.length >= 1) {
throw new Error(msg + 'package.json must declare these properties: \n' + missing.join('\n'));
}
if (o) {
// enforce https over http
const protocol = url.parse(o.host).protocol;
if (protocol === 'http:') {
throw new Error("'host' protocol (" + protocol + ") is invalid - only 'https:' is accepted");
}
}
napi.validate_package_json(package_json, opts);
}
module.exports.validate_config = validate_config;
function eval_template(template, opts) {
Object.keys(opts).forEach((key) => {
const pattern = '{' + key + '}';
while (template.indexOf(pattern) > -1) {
template = template.replace(pattern, opts[key]);
}
});
return template;
}
// url.resolve needs single trailing slash
// to behave correctly, otherwise a double slash
// may end up in the url which breaks requests
// and a missing slash may prevent proper joining
function fix_slashes(pathname) {
if (pathname.slice(-1) !== '/') {
return pathname + '/';
}
return pathname;
}
// remove double slashes
// note: path.normalize will not work because
// it will convert forward to back slashes
function drop_double_slashes(pathname) {
return pathname.replace(/\/\//g, '/');
}
function get_process_runtime(versions) {
let runtime = 'node';
if (versions['node-webkit']) {
runtime = 'node-webkit';
} else if (versions.electron) {
runtime = 'electron';
}
return runtime;
}
module.exports.get_process_runtime = get_process_runtime;
const default_package_name = '{module_name}-v{version}-{node_abi}-{platform}-{arch}.tar.gz';
const default_remote_path = '';
module.exports.evaluate = function(package_json, options, napi_build_version) {
options = options || {};
validate_config(package_json, options); // options is a suitable substitute for opts in this case
const v = package_json.version;
const module_version = semver.parse(v);
const runtime = options.runtime || get_process_runtime(process.versions);
const opts = {
name: package_json.name,
configuration: options.debug ? 'Debug' : 'Release',
debug: options.debug,
module_name: package_json.binary.module_name,
version: module_version.version,
prerelease: module_version.prerelease.length ? module_version.prerelease.join('.') : '',
build: module_version.build.length ? module_version.build.join('.') : '',
major: module_version.major,
minor: module_version.minor,
patch: module_version.patch,
runtime: runtime,
node_abi: get_runtime_abi(runtime, options.target),
node_abi_napi: napi.get_napi_version(options.target) ? 'napi' : get_runtime_abi(runtime, options.target),
napi_version: napi.get_napi_version(options.target), // non-zero numeric, undefined if unsupported
napi_build_version: napi_build_version || '',
node_napi_label: napi_build_version ? 'napi-v' + napi_build_version : get_runtime_abi(runtime, options.target),
target: options.target || '',
platform: options.target_platform || process.platform,
target_platform: options.target_platform || process.platform,
arch: options.target_arch || process.arch,
target_arch: options.target_arch || process.arch,
libc: options.target_libc || detect_libc.familySync() || 'unknown',
module_main: package_json.main,
toolset: options.toolset || '', // address https://github.com/mapbox/node-pre-gyp/issues/119
bucket: package_json.binary.bucket,
region: package_json.binary.region,
s3ForcePathStyle: package_json.binary.s3ForcePathStyle || false
};
// support host mirror with npm config `--{module_name}_binary_host_mirror`
// e.g.: https://github.com/node-inspector/v8-profiler/blob/master/package.json#L25
// > npm install v8-profiler --profiler_binary_host_mirror=https://npm.taobao.org/mirrors/node-inspector/
const validModuleName = opts.module_name.replace('-', '_');
const host = process.env['npm_config_' + validModuleName + '_binary_host_mirror'] || package_json.binary.host;
opts.host = fix_slashes(eval_template(host, opts));
opts.module_path = eval_template(package_json.binary.module_path, opts);
// now we resolve the module_path to ensure it is absolute so that binding.gyp variables work predictably
if (options.module_root) {
// resolve relative to known module root: works for pre-binding require
opts.module_path = path.join(options.module_root, opts.module_path);
} else {
// resolve relative to current working directory: works for node-pre-gyp commands
opts.module_path = path.resolve(opts.module_path);
}
opts.module = path.join(opts.module_path, opts.module_name + '.node');
opts.remote_path = package_json.binary.remote_path ? drop_double_slashes(fix_slashes(eval_template(package_json.binary.remote_path, opts))) : default_remote_path;
const package_name = package_json.binary.package_name ? package_json.binary.package_name : default_package_name;
opts.package_name = eval_template(package_name, opts);
opts.staged_tarball = path.join('build/stage', opts.remote_path, opts.package_name);
opts.hosted_path = url.resolve(opts.host, opts.remote_path);
opts.hosted_tarball = url.resolve(opts.hosted_path, opts.package_name);
return opts;
};


@@ -1,22 +0,0 @@
The MIT License (MIT)
Copyright (c) 2016 David Frank
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.


@@ -1,590 +0,0 @@
node-fetch
==========
[![npm version][npm-image]][npm-url]
[![build status][travis-image]][travis-url]
[![coverage status][codecov-image]][codecov-url]
[![install size][install-size-image]][install-size-url]
[![Discord][discord-image]][discord-url]
A light-weight module that brings `window.fetch` to Node.js
(We are looking for [v2 maintainers and collaborators](https://github.com/bitinn/node-fetch/issues/567))
[![Backers][opencollective-image]][opencollective-url]
<!-- TOC -->
- [Motivation](#motivation)
- [Features](#features)
- [Difference from client-side fetch](#difference-from-client-side-fetch)
- [Installation](#installation)
- [Loading and configuring the module](#loading-and-configuring-the-module)
- [Common Usage](#common-usage)
- [Plain text or HTML](#plain-text-or-html)
- [JSON](#json)
- [Simple Post](#simple-post)
- [Post with JSON](#post-with-json)
- [Post with form parameters](#post-with-form-parameters)
- [Handling exceptions](#handling-exceptions)
- [Handling client and server errors](#handling-client-and-server-errors)
- [Advanced Usage](#advanced-usage)
- [Streams](#streams)
- [Buffer](#buffer)
- [Accessing Headers and other Meta data](#accessing-headers-and-other-meta-data)
- [Extract Set-Cookie Header](#extract-set-cookie-header)
- [Post data using a file stream](#post-data-using-a-file-stream)
- [Post with form-data (detect multipart)](#post-with-form-data-detect-multipart)
- [Request cancellation with AbortSignal](#request-cancellation-with-abortsignal)
- [API](#api)
- [fetch(url[, options])](#fetchurl-options)
- [Options](#options)
- [Class: Request](#class-request)
- [Class: Response](#class-response)
- [Class: Headers](#class-headers)
- [Interface: Body](#interface-body)
- [Class: FetchError](#class-fetcherror)
- [License](#license)
- [Acknowledgement](#acknowledgement)
<!-- /TOC -->
## Motivation
Instead of implementing `XMLHttpRequest` in Node.js to run browser-specific [Fetch polyfill](https://github.com/github/fetch), why not go from native `http` to `fetch` API directly? Hence, `node-fetch`, minimal code for a `window.fetch` compatible API on Node.js runtime.
See Matt Andrews' [isomorphic-fetch](https://github.com/matthew-andrews/isomorphic-fetch) or Leonardo Quixada's [cross-fetch](https://github.com/lquixada/cross-fetch) for isomorphic usage (exports `node-fetch` for server-side, `whatwg-fetch` for client-side).
## Features
- Stay consistent with `window.fetch` API.
- Make conscious trade-offs when following [WHATWG fetch spec][whatwg-fetch] and [stream spec](https://streams.spec.whatwg.org/) implementation details, and document known differences.
- Use native promise but allow substituting it with [insert your favorite promise library].
- Use native Node streams for body on both request and response.
- Decode content encoding (gzip/deflate) properly and convert string output (such as `res.text()` and `res.json()`) to UTF-8 automatically.
- Useful extensions such as timeout, redirect limit, response size limit, [explicit errors](ERROR-HANDLING.md) for troubleshooting.
## Difference from client-side fetch
- See [Known Differences](LIMITS.md) for details.
- If you happen to use a missing feature that `window.fetch` offers, feel free to open an issue.
- Pull requests are welcomed too!
## Installation
Current stable release (`2.x`)
```sh
$ npm install node-fetch
```
## Loading and configuring the module
We suggest you load the module via `require` until the stabilization of ES modules in node:
```js
const fetch = require('node-fetch');
```
If you are using a Promise library other than native, set it through `fetch.Promise`:
```js
const Bluebird = require('bluebird');
fetch.Promise = Bluebird;
```
## Common Usage
NOTE: The documentation below is up-to-date with `2.x` releases; see the [`1.x` readme](https://github.com/bitinn/node-fetch/blob/1.x/README.md), [changelog](https://github.com/bitinn/node-fetch/blob/1.x/CHANGELOG.md) and [2.x upgrade guide](UPGRADE-GUIDE.md) for the differences.
#### Plain text or HTML
```js
fetch('https://github.com/')
.then(res => res.text())
.then(body => console.log(body));
```
#### JSON
```js
fetch('https://api.github.com/users/github')
.then(res => res.json())
.then(json => console.log(json));
```
#### Simple Post
```js
fetch('https://httpbin.org/post', { method: 'POST', body: 'a=1' })
.then(res => res.json()) // expecting a json response
.then(json => console.log(json));
```
#### Post with JSON
```js
const body = { a: 1 };
fetch('https://httpbin.org/post', {
method: 'post',
body: JSON.stringify(body),
headers: { 'Content-Type': 'application/json' },
})
.then(res => res.json())
.then(json => console.log(json));
```
#### Post with form parameters
`URLSearchParams` is available in Node.js as of v7.5.0. See [official documentation](https://nodejs.org/api/url.html#url_class_urlsearchparams) for more usage methods.
NOTE: The `Content-Type` header is only set automatically to `x-www-form-urlencoded` when an instance of `URLSearchParams` is given as such:
```js
const { URLSearchParams } = require('url');
const params = new URLSearchParams();
params.append('a', 1);
fetch('https://httpbin.org/post', { method: 'POST', body: params })
.then(res => res.json())
.then(json => console.log(json));
```
#### Handling exceptions
NOTE: 3xx-5xx responses are *NOT* exceptions and should be handled in `then()`; see the next section for more information.
Adding a catch to the fetch promise chain will catch *all* exceptions, such as errors originating from node core libraries, network errors and operational errors, which are instances of FetchError. See the [error handling document](ERROR-HANDLING.md) for more details.
```js
fetch('https://domain.invalid/')
.catch(err => console.error(err));
```
#### Handling client and server errors
It is common to create a helper function to check that the response contains no client (4xx) or server (5xx) error responses:
```js
function checkStatus(res) {
if (res.ok) { // res.status >= 200 && res.status < 300
return res;
} else {
throw new MyCustomError(res.statusText);
}
}
fetch('https://httpbin.org/status/400')
.then(checkStatus)
.then(res => console.log('will not get here...'))
```
## Advanced Usage
#### Streams
The "Node.js way" is to use streams when possible:
```js
fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png')
.then(res => {
const dest = fs.createWriteStream('./octocat.png');
res.body.pipe(dest);
});
```
#### Buffer
If you prefer to cache binary data in full, use `buffer()`. (NOTE: `buffer()` is a `node-fetch`-only API)
```js
const fileType = require('file-type');
fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png')
.then(res => res.buffer())
.then(buffer => fileType(buffer))
.then(type => { /* ... */ });
```
#### Accessing Headers and other Meta data
```js
fetch('https://github.com/')
.then(res => {
console.log(res.ok);
console.log(res.status);
console.log(res.statusText);
console.log(res.headers.raw());
console.log(res.headers.get('content-type'));
});
```
#### Extract Set-Cookie Header
Unlike browsers, you can access raw `Set-Cookie` headers manually using `Headers.raw()`. This is a `node-fetch` only API.
```js
fetch(url).then(res => {
// returns an array of values, instead of a string of comma-separated values
console.log(res.headers.raw()['set-cookie']);
});
```
#### Post data using a file stream
```js
const { createReadStream } = require('fs');
const stream = createReadStream('input.txt');
fetch('https://httpbin.org/post', { method: 'POST', body: stream })
.then(res => res.json())
.then(json => console.log(json));
```
#### Post with form-data (detect multipart)
```js
const FormData = require('form-data');
const form = new FormData();
form.append('a', 1);
fetch('https://httpbin.org/post', { method: 'POST', body: form })
.then(res => res.json())
.then(json => console.log(json));
// OR, using custom headers
// NOTE: getHeaders() is non-standard API
const form = new FormData();
form.append('a', 1);
const options = {
method: 'POST',
body: form,
headers: form.getHeaders()
}
fetch('https://httpbin.org/post', options)
.then(res => res.json())
.then(json => console.log(json));
```
#### Request cancellation with AbortSignal
> NOTE: You may cancel streamed requests only on Node >= v8.0.0
You may cancel requests with `AbortController`. A suggested implementation is [`abort-controller`](https://www.npmjs.com/package/abort-controller).
An example of timing out a request after 150ms could be achieved as follows:
```js
import AbortController from 'abort-controller';
const controller = new AbortController();
const timeout = setTimeout(
() => { controller.abort(); },
150,
);
fetch(url, { signal: controller.signal })
.then(res => res.json())
.then(
data => {
useData(data)
},
err => {
if (err.name === 'AbortError') {
// request was aborted
}
},
)
.finally(() => {
clearTimeout(timeout);
});
```
See [test cases](https://github.com/bitinn/node-fetch/blob/master/test/test.js) for more examples.
## API
### fetch(url[, options])
- `url` A string representing the URL for fetching
- `options` [Options](#fetch-options) for the HTTP(S) request
- Returns: <code>Promise&lt;[Response](#class-response)&gt;</code>
Perform an HTTP(S) fetch.
`url` should be an absolute url, such as `https://example.com/`. A path-relative URL (`/file/under/root`) or protocol-relative URL (`//can-be-http-or-https.com/`) will result in a rejected `Promise`.
<a id="fetch-options"></a>
### Options
The default values are shown after each option key.
```js
{
// These properties are part of the Fetch Standard
method: 'GET',
headers: {}, // request headers. format is identical to that accepted by the Headers constructor (see below)
body: null, // request body. can be null, a string, a Buffer, a Blob, or a Node.js Readable stream
redirect: 'follow', // set to `manual` to extract redirect headers, `error` to reject redirect
signal: null, // pass an instance of AbortSignal to optionally abort requests
// The following properties are node-fetch extensions
follow: 20, // maximum redirect count. 0 to not follow redirect
timeout: 0, // req/res timeout in ms, it resets on redirect. 0 to disable (OS limit applies). Signal is recommended instead.
compress: true, // support gzip/deflate content encoding. false to disable
size: 0, // maximum response body size in bytes. 0 to disable
agent: null // http(s).Agent instance or function that returns an instance (see below)
}
```
##### Default Headers
If no values are set, the following request headers will be sent automatically:
Header | Value
------------------- | --------------------------------------------------------
`Accept-Encoding` | `gzip,deflate` _(when `options.compress === true`)_
`Accept` | `*/*`
`Connection` | `close` _(when no `options.agent` is present)_
`Content-Length` | _(automatically calculated, if possible)_
`Transfer-Encoding` | `chunked` _(when `req.body` is a stream)_
`User-Agent` | `node-fetch/1.0 (+https://github.com/bitinn/node-fetch)`
Note: when `body` is a `Stream`, `Content-Length` is not set automatically.
##### Custom Agent
The `agent` option allows you to specify networking-related options which are outside the scope of Fetch, including but not limited to the following:
- Support self-signed certificate
- Use only IPv4 or IPv6
- Custom DNS Lookup
See [`http.Agent`](https://nodejs.org/api/http.html#http_new_agent_options) for more information.
In addition, the `agent` option accepts a function that returns an `http(s).Agent` instance given the current [URL](https://nodejs.org/api/url.html); this is useful during a redirection chain across HTTP and HTTPS protocols.
```js
const httpAgent = new http.Agent({
keepAlive: true
});
const httpsAgent = new https.Agent({
keepAlive: true
});
const options = {
agent: function (_parsedURL) {
if (_parsedURL.protocol == 'http:') {
return httpAgent;
} else {
return httpsAgent;
}
}
}
```
<a id="class-request"></a>
### Class: Request
An HTTP(S) request containing information about URL, method, headers, and the body. This class implements the [Body](#iface-body) interface.
Due to the nature of Node.js, the following properties are not implemented at this moment:
- `type`
- `destination`
- `referrer`
- `referrerPolicy`
- `mode`
- `credentials`
- `cache`
- `integrity`
- `keepalive`
The following node-fetch extension properties are provided:
- `follow`
- `compress`
- `counter`
- `agent`
See [options](#fetch-options) for exact meaning of these extensions.
#### new Request(input[, options])
<small>*(spec-compliant)*</small>
- `input` A string representing a URL, or another `Request` (which will be cloned)
- `options` [Options](#fetch-options) for the HTTP(S) request
Constructs a new `Request` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Request/Request).
In most cases, directly `fetch(url, options)` is simpler than creating a `Request` object.
<a id="class-response"></a>
### Class: Response
An HTTP(S) response. This class implements the [Body](#iface-body) interface.
The following properties are not implemented in node-fetch at this moment:
- `Response.error()`
- `Response.redirect()`
- `type`
- `trailer`
#### new Response([body[, options]])
<small>*(spec-compliant)*</small>
- `body` A `String` or [`Readable` stream][node-readable]
- `options` A [`ResponseInit`][response-init] options dictionary
Constructs a new `Response` object. The constructor is identical to that in the [browser](https://developer.mozilla.org/en-US/docs/Web/API/Response/Response).
Because Node.js does not implement service workers (for which this class was designed), one rarely has to construct a `Response` directly.
#### response.ok
<small>*(spec-compliant)*</small>
Convenience property representing whether the request ended normally. Will evaluate to true if the response status is greater than or equal to 200 but smaller than 300.
#### response.redirected
<small>*(spec-compliant)*</small>
Convenience property indicating whether the request has been redirected at least once. Evaluates to `true` if the internal redirect counter is greater than 0.
<a id="class-headers"></a>
### Class: Headers
This class allows manipulating and iterating over a set of HTTP headers. All methods specified in the [Fetch Standard][whatwg-fetch] are implemented.
#### new Headers([init])
<small>*(spec-compliant)*</small>
- `init` Optional argument to pre-fill the `Headers` object
Constructs a new `Headers` object. `init` can be `null`, another `Headers` object, a key-value map object, or any iterable yielding name/value pairs.
```js
// Example adapted from https://fetch.spec.whatwg.org/#example-headers-class
const meta = {
'Content-Type': 'text/xml',
'Breaking-Bad': '<3'
};
const headers = new Headers(meta);
// The above is equivalent to passing an array of name/value pairs
const pairs = [
	[ 'Content-Type', 'text/xml' ],
	[ 'Breaking-Bad', '<3' ]
];
const headersFromPairs = new Headers(pairs);

// In fact, any iterable works: a Map, or even another Headers
const map = new Map();
map.set('Content-Type', 'text/xml');
map.set('Breaking-Bad', '<3');
const headersFromMap = new Headers(map);
const copyOfHeaders = new Headers(headersFromMap);
```
<a id="iface-body"></a>
### Interface: Body
`Body` is an abstract interface with methods that are applicable to both `Request` and `Response` classes.
The following methods are not yet implemented in node-fetch at this moment:
- `formData()`
#### body.body
<small>*(deviation from spec)*</small>
* Node.js [`Readable` stream][node-readable]
Data are encapsulated in the `Body` object. Note that while the [Fetch Standard][whatwg-fetch] requires the property to always be a WHATWG `ReadableStream`, in node-fetch it is a Node.js [`Readable` stream][node-readable].
#### body.bodyUsed
<small>*(spec-compliant)*</small>
* `Boolean`
A boolean property indicating whether this body has been consumed. Per the spec, a consumed body cannot be read again.
#### body.arrayBuffer()
#### body.blob()
#### body.json()
#### body.text()
<small>*(spec-compliant)*</small>
* Returns: <code>Promise</code>
Consume the body and return a promise that will resolve to one of these formats.
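These consumption methods interact with `bodyUsed`: the first read succeeds and marks the body consumed, and any later read rejects. A minimal sketch with a locally constructed `Response` (spec-compliant behavior, so it is the same under node-fetch v2 or the Node.js ≥ 18 globals):

```js
// With node-fetch v2: const { Response } = require('node-fetch');
(async () => {
	const res = new Response('hello world');
	console.log(res.bodyUsed); // false

	const text = await res.text();
	console.log(text); // "hello world"
	console.log(res.bodyUsed); // true

	// A second read rejects with a TypeError, per the spec
	await res.text().catch(err => console.log(err instanceof TypeError)); // true
})();
```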
#### body.buffer()
<small>*(node-fetch extension)*</small>
* Returns: <code>Promise&lt;Buffer&gt;</code>
Consume the body and return a promise that will resolve to a Buffer.
#### body.textConverted()
<small>*(node-fetch extension)*</small>
* Returns: <code>Promise&lt;String&gt;</code>
Identical to `body.text()`, except that instead of always assuming UTF-8, encoding sniffing is performed and the text is converted to UTF-8 where possible.
(This API requires an optional dependency of the npm package [encoding](https://www.npmjs.com/package/encoding), which you need to install manually. `webpack` users may see [a warning message](https://github.com/bitinn/node-fetch/issues/412#issuecomment-379007792) due to this optional dependency.)
<a id="class-fetcherror"></a>
### Class: FetchError
<small>*(node-fetch extension)*</small>
An operational error in the fetching process. See [ERROR-HANDLING.md][] for more info.
<a id="class-aborterror"></a>
### Class: AbortError
<small>*(node-fetch extension)*</small>
An Error thrown when the request is aborted in response to an `AbortSignal`'s `abort` event. It has a `name` property of `AbortError`. See [ERROR-HANDLING.md][] for more info.
## Acknowledgement
Thanks to [github/fetch](https://github.com/github/fetch) for providing a solid implementation reference.
`node-fetch` v1 was maintained by [@bitinn](https://github.com/bitinn); v2 was maintained by [@TimothyGu](https://github.com/timothygu), [@bitinn](https://github.com/bitinn) and [@jimmywarting](https://github.com/jimmywarting); v2 readme is written by [@jkantr](https://github.com/jkantr).
## License
MIT
[npm-image]: https://flat.badgen.net/npm/v/node-fetch
[npm-url]: https://www.npmjs.com/package/node-fetch
[travis-image]: https://flat.badgen.net/travis/bitinn/node-fetch
[travis-url]: https://travis-ci.org/bitinn/node-fetch
[codecov-image]: https://flat.badgen.net/codecov/c/github/bitinn/node-fetch/master
[codecov-url]: https://codecov.io/gh/bitinn/node-fetch
[install-size-image]: https://flat.badgen.net/packagephobia/install/node-fetch
[install-size-url]: https://packagephobia.now.sh/result?p=node-fetch
[discord-image]: https://img.shields.io/discord/619915844268326952?color=%237289DA&label=Discord&style=flat-square
[discord-url]: https://discord.gg/Zxbndcm
[opencollective-image]: https://opencollective.com/node-fetch/backers.svg
[opencollective-url]: https://opencollective.com/node-fetch
[whatwg-fetch]: https://fetch.spec.whatwg.org/
[response-init]: https://fetch.spec.whatwg.org/#responseinit
[node-readable]: https://nodejs.org/api/stream.html#stream_readable_streams
[mdn-headers]: https://developer.mozilla.org/en-US/docs/Web/API/Headers
[LIMITS.md]: https://github.com/bitinn/node-fetch/blob/master/LIMITS.md
[ERROR-HANDLING.md]: https://github.com/bitinn/node-fetch/blob/master/ERROR-HANDLING.md
[UPGRADE-GUIDE.md]: https://github.com/bitinn/node-fetch/blob/master/UPGRADE-GUIDE.md