
[Merged by Bors] - Implement stacks and services commands #36

Closed
sbernauer wants to merge 194 commits into main from access-services

Conversation

@sbernauer (Member) commented Jun 17, 2022

This PR adds two new subcommands:

* Stacks to install ready-to-use product sets
* Services, which allows users to list and access the running services (#10)

The structure of the `stacks.yaml` still needs to be discussed with @lfrancke and might be adapted to the porter.yaml style.
Listing of services is a mechanism I came up with, so it is ready to review as is :)

To test the stack command, run

cargo r -- --additional-stack-files stacks.yaml stack install druid-superset-s3 -k

To test the listing of services, run the following (this command was run after installing the stack druid-superset-s3 and a bunch of other products):

cargo r -- services ls 
    Finished dev [unoptimized + debuginfo] target(s) in 0.05s
     Running `target/debug/stackablectl services ls`
PRODUCT      NAME                                     NAMESPACE                      ENDPOINTS                                          EXTRA INFOS
airflow      airflow                                  default                        webserver-airflow:   http://172.18.0.4:31206       Admin user: airflow, password: airflow
druid        druid                                    default                        router-http:         http://172.18.0.4:32126       
                                                                                     coordinator-http:    http://172.18.0.4:30347       
hbase        simple-hbase                             default                        regionserver:        172.18.0.4:32413              
                                                                                     ui:                  http://172.18.0.4:32051       
                                                                                     metrics:             172.18.0.4:30504              
hdfs         simple-hdfs                              default                        datanode-default-0-metrics: 172.18.0.4:32607       
                                                                                     datanode-default-0-data: 172.18.0.4:30655          
                                                                                     datanode-default-0-http: http://172.18.0.4:32340   
                                                                                     datanode-default-0-ipc: 172.18.0.4:31295           
                                                                                     namenode-default-0-metrics: 172.18.0.3:31541       
                                                                                     namenode-default-0-http: http://172.18.0.3:31669   
                                                                                     namenode-default-0-rpc: 172.18.0.3:32286           
                                                                                     journalnode-default-0-metrics: 172.18.0.5:30631    
                                                                                     journalnode-default-0-http: http://172.18.0.5:31961 
                                                                                     journalnode-default-0-https: https://172.18.0.5:30133 
                                                                                     journalnode-default-0-rpc: 172.18.0.5:31222        
hive         simple-hive-derby                        default                        hive:                172.18.0.4:30560              
                                                                                     metrics:             172.18.0.4:31796              
superset     superset                                 default                        external-superset:   http://172.18.0.3:30067       Admin user: admin, password: admin
trino        simple-trino                             default                        coordinator-http:    http://172.18.0.5:31395       
                                                                                     coordinator-metrics: 172.18.0.5:32214              
zookeeper    druid-zookeeper                          default                        zk:                  172.18.0.3:32220              
zookeeper    simple-zk                                default                        zk:                  172.18.0.5:32548              
minio        minio-druid                              default                        http:                http://172.18.0.5:30054       Third party service
                                                                                     console-http:        http://172.18.0.5:31771       Admin user: root, password: rootroot

@maltesander (Member) left a comment


Some more comments... will do some testing now. I think we need a more automated way to provide the product YAMLs (e.g. druid-superset-s3).

@maltesander (Member) commented

Tested and works fine. The only thing I dislike about the service feature is the warnings that appear if not all operators / CRDs are installed:

malte@mdesktop ~/d/w/stackablectl (access-services)> stackablectl svc list
[WARN ] Unsuccessful data error parse: 404 page not found
[WARN ] Unsuccessful data error parse: 404 page not found
[WARN ] Unsuccessful data error parse: 404 page not found
[WARN ] Unsuccessful data error parse: 404 page not found
[WARN ] Unsuccessful data error parse: 404 page not found
[WARN ] Unsuccessful data error parse: 404 page not found
[WARN ] Unsuccessful data error parse: 404 page not found
[WARN ] Unsuccessful data error parse: 404 page not found
 PRODUCT  NAME  NAMESPACE  ENDPOINTS  EXTRA INFOS 

This is coming from kube-rs (kube-rs/kube#949, kube-rs/kube#948).

In main.rs:

use std::error::Error;

use clap::Parser;

#[tokio::main]
async fn main() -> Result<(), Box<dyn Error>> {
    let args = CliArgs::parse();
    // Raise the kube_client target to Error so its 404 WARNs are hidden,
    // while the user-configured level still applies everywhere else
    env_logger::builder()
        .format_timestamp(None)
        .format_target(false)
        .filter_level(args.log_level.into())
        .filter(Some("kube_client"), log::LevelFilter::Error)
        .init();
    // ...
    Ok(())
}

@sbernauer (Member, Author) commented

I think I found a better solution: I'm using discovery to check whether the product CRD is installed. The api.list() will only be executed when the product CRD is installed :)
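
For reference, here is a minimal sketch of that approach using kube-rs Discovery; the GroupVersionKind and function name are illustrative assumptions, not necessarily the PR's actual code:

```
use kube::{
    api::{Api, DynamicObject, ListParams},
    core::GroupVersionKind,
    discovery::Discovery,
    Client, ResourceExt,
};

// Sketch only: the GVK below is an illustrative product CRD, not
// necessarily the one the PR uses.
async fn list_product_services(client: Client) -> Result<(), kube::Error> {
    let gvk = GroupVersionKind::gvk("druid.stackable.tech", "v1alpha1", "DruidCluster");

    // One discovery pass asks the API server which kinds it actually serves.
    let discovery = Discovery::new(client.clone()).run().await?;

    // resolve_gvk returns None when the CRD is not installed, so the list
    // call is skipped entirely instead of producing the 404 WARNs above.
    if let Some((api_resource, _caps)) = discovery.resolve_gvk(&gvk) {
        let api: Api<DynamicObject> = Api::all_with(client, &api_resource);
        for obj in api.list(&ListParams::default()).await?.items {
            println!("{}", obj.name_any());
        }
    }
    Ok(())
}
```

Resolving the GVK via discovery also yields the ApiResource needed to construct a dynamically typed Api, so the check and the list call share one code path.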

@sbernauer sbernauer requested a review from maltesander August 9, 2022 09:17
@maltesander (Member) left a comment


LGTM.

@soenkeliebau soenkeliebau dismissed siegfriedweber’s stale review August 9, 2022 10:58

Sigi and Sebastian finished their discussions before Sigi left for vacation; I am dismissing this review so we do not have to wait until Sigi's return before merging.

@sbernauer (Member, Author) commented

Many thanks to all of you!

@sbernauer (Member, Author) commented

bors r+

bors bot pushed a commit that referenced this pull request Aug 9, 2022

bors bot commented Aug 9, 2022

Pull request successfully merged into main.

Build succeeded.

@bors bors bot changed the title from "Implement stacks and services commands" to "[Merged by Bors] - Implement stacks and services commands" Aug 9, 2022
@bors bors bot closed this Aug 9, 2022
@bors bors bot deleted the access-services branch August 9, 2022 11:00